var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz
[tar archive body: gzip-compressed kubelet.log from the Zuul CI output tree (owner core:core); binary compressed data not representable as text]
ZjgDUR9> RnNshv"*%IǎH!:Hu>Mc{< AOs\~diinbu6߫S7?läDAUlp"c(*ޢ x F8nF.s }g,Ct1%%z3@.cI1{+A}&8BEFgB;$f,!8rYJf.s"w; ´PҚ8;͍& z*µ꿎>GrY"CQ[!N.@2.Eq&JnT-Ŭ%S7$TvuI׼/GT %&eێՖ8^zݾX"6NI}8%sS _C,ɴtzZԝ>MHZ[ޖ fC8A̎*o_] >wLuQiU 9ep VAG@ +l_!vy 0dz0B$K<&Y?J$dڶþhkX5[x-t-B8ƒ¹DƓ/Rs~۾9iy^w4-6`JaYimwp,B,[!g)yAMQ2ݠ: *]D|L (fS F%+\\EBAug~Zb[ڂk;Y86TJ(W Y y.Ġ0kRzBb6B@sE Xth$d G,) )bMakl5E(h,b[l",mӜ襤.Q<&fUe#x+0',WQV-XI^ j%r#A!#!88Nr6q \"xHaUσCRP Y4{U]]Eb3[XA`Ir,i$53Gk;~5rkď^8+:&_gk\^$-EN/n$Dڥkhw"bAsls _9 EaBKB:xx,5WҖ!O@Hkr;b |/8nFQD=E?>SB_a z{e=Eߓu[}&&eڊٰ@04fOEqּ}霚H_l$~ i Zτ 9 jϩT)`f JJ)*©nj X`T9F$㑅[L)#\0+C-[ cߌGnMXv-m0ǛinE~Oo%IiSK#o' -4\ZQ!coXv Fs!!Th]`U2pީN#D#$^ظS bxŀ2#ҭe CutZAmLaf2 5&ܴ Lz{7"=A/X{)) ;e Q Pڮw9;ov`4奲~wE5e!Vh$Ӽ`äRVkT:{f⥞T80̘\aBH`/ #Kƒh$2ЄM\'I&RUB_*h]G!mD=mvaZ^jC'䓑 \?@ M{lix4|<*>dMG?>Tmrc\&_gq<ήTY)qY^G0I߼-*LDÕuUϥRъek yaɦ7T+n,wџ^]/ؤw'b\ƾge:kF(>di$EwL5La>ioهw#" 9 feâ`*Et.Ʃ4У໌!Nc|_J?S@z$]7ʳ>7/XS?_ЯB ;ݪD!׾sxK-XE\#X 年UG)|XσW2TED=0]E[Xf=R7EbъbR$k-%R'F"0Qejs>A\+QԤ⚲l02ޚI)A,.JA\m0zmn~L@YOz%(T;ph|ѻ}iK!~`կ.FLR٫Gcl ny@^z-穦^Z`%X-W&AUW,GUxtM_ L3* Oƾw{ \Saن/|B:ts-U=Ȓ L3}Z[D\ FiPByΠ n K%o4^<^lA]?i*prG-Pݼ d]bj4|݅k_*.N3|Bd3z=i"o'?vƣ ap>ɝZǥ#4Ueiiĩ"LiY]t1ezֵ ]RHBۉ1W\imqH*o M7|>'Haa&3<0 orh@Q\jwh6lsfZ3͸aY9d&'\_akXvrgn ocR_MTy1=^v@Ӯ~hݹuW+W[#ߔmQ_լUh` f( cd"VadD!kdZ.qCJ[h!6 P<ǝh8^1#\ Hb)'z*0b+SI+)B )%.rѷxtvs,i:Z㍗]\xsj95J) ]*IBbcR<`Sa1[nu OAu969%=rJNXYؚNbz):uD$(wp  O=cF@E G#,8ZI}#|h7P`w1 ޭ2[/8P0vY4a#A29HsE%Kv@vk׏ G6ef3!-j  iO-9sU΂#"ᷠcҁ^gQ=$), #4!v HSwN+H`V #Gw'Nc/-^D}5W΋E>.Il5Nsb GO#ҥ]D7:%2c"U$:nd'o>ٝ^5•$"hepոYA+|7eI.J6y\ D4J2,%9+rޫ+왯NKo*{ >(ܮAnl0E<['UUm7(y,^Jx.NP|^/Gŷ_z8_T4+;x8)) ǬfcfEP{-奻Y+> 1dH^gNϯgp|0wff6In8g!l %7ih 5(O+LwN8aS9aS;aԓ9a$2Jri\I|- XHdvR`#~FlP)Pʍ"VFSyYqGkZ  ٲ9Z3ȋ8nm+ڧ 0oL֡ϵq T{v0aEF\w$b )r-x (}Nl =-#D$RH'3%t*)Ehx)X@,LjMnr!*@|mTkc.L4ݴB+];KN'F ͍t1_ 7Q,g,W$7{cHKTN'TKlIj}ۑ)leI,?]*ߌUj/x=]XVe/ ,Z8ӎE4p&4#U} 5q=nf$FΎϏ7^zOF Mz\n"ᕆӣ]y^j 6y;ht^yN,hdDHw"Y 4Gj5AEEKn0; ƴ5HwD@4(fEw)w;  AyCʣ X00E2]+ɌQ y!9 a-1\>I`? ?ִ W8x 0#l r9&(V!c#"zwHwԮ =wrp{ d#;? q3.Ù$R0͊"Ya*+5*54`M\'){2 byE;dV V` $"a :RL݉LµܞLO@ճ_%P0 8+:mqnH %>֓+Vsr-,Hߝ>EQ# E1 ,3 S׻ ٍ+DMo#` ZS+ Lkbj KBNY2uV6OnFpZ>y[)QFD^buar]vT6ZOiԆ DQ'#2RNa4.&b<=|QyrONЩgYX>\o7߽>|_?ρ|װ V`\JH}4 > _o~}hƫ24msՂ2hs mB(=~}z='.]&)?E Vq?>Why?\ŽQU[p׾P! 
0~@O5^>X{kbQ&eX<` ``Y9PQC{Nju~l&?F7mq$(9`+X&t8ZKSr錺9/DcyyH Dc([*"HI *#VaQ3G !T+PGUV5ثȎhļr{uX;NWk0`oi1TD4gH܀2ٻ6r$W*iPccu8?88e(MRrk6oxI*"A}HB@%*a@:,Gyǒ%?;DQ !X29W(sP>x6zgL@4Io7 t_ =Lo 3)kSߧTaW8 j]Χ䣣GNbes4edN =K*(Y4D!rLE BtrN5T9.)d"U(q !8B -Erqc ~z=RY x$ )[ibjX$"﵌FMFS^fgK09r8jqJqf3z9lZدߢy`يYפƓfz=M]%0/J3#UB>\ w &cç[{ɱ$QK-9+RfMXmzq"3.xyI;460Ls8O }6a|_|ʥ ѱX}:9x6: Z΢%ٟ;quq|7f%w:$;ɰlPK, KڥTGT:FVJ5,mϳw7QQ B)Z?,#,bR h]byɰǥ&>( ʿ7Ŵ/6av$kbQ4  s0pk(@x.*^Лz󯝇^Pɉ>#J\%q<JJ}pR :JR<#5gWI\N:\%)+ ;'JaWI\}6 =p$\}pR\\%q>JK\9\QSt8{W/xdQ|8[/RV;EU]^Hzsg(c Q岉}U|% :OV) + ?)~wopZЛ}^bx_1zEq.Z<{%xBM{:k 956.@AU7YAi#:O{Wc,6x2YC(VRqXt Kyo~ v&lv]L.^׀.CEJs(F#[>=Ӡ\_j:RjKf(LNe&zG`hYND79Fv2)np0%sH[]F‰uX~,Q>m=;mN`cd NC7E-7fԫ$iYgnM/}qQY=:MvQt3jhX Ƨ+Y Tڸ2P*S;xF I`!f$+$i\NtwAHZ9L ZϤ26"Ź*%61^T*Y)~ ;b񓈕Ǵ .-bDS A.z1U8prHDh>w@>0z>%j,G|w;N eΑqɃYK8%ud(Yif IM *jxT~8!,LI(uaaR"r%U]:(+Ls^EOVBBBmējK*u?'*x=2qTQG.0*锾.DbT磵(}J'rW-&x# D X&HFllF|\%fӌ]PgBaAp%.1-ӳL&I|J{ 4Q\\J@YbBdPD̅D̛WȠg 'H !%D=r6Ic2a6q6aebE%`<D̥""ΌC~ g0RP~Q GEQrinUHFi Eo>o m8XG!1pz ",iP΁%MFb_>ٌ_>j 82lLFAƄT`9,f`Xo;x<[!w7O:U|mv@4٬Z$es](7"qHcCԤjR)"'F!Bxp<;iy:CmT`]pJ"|pEœa9Q=q0 5' $n\?##z R0cr !y(HB4G/UK9B@7qݬ64ՉLjR9-A$ԋHQnˮg7 S@L|) zZ('Cw_9[krRZB(sZm0`!En[!;a|4oT,X.KRzDh)LHۼk5MVqq=(dd\( Mmqi)8]y= p|ƓhғUH`%=@*RE'R۪,~>+_,~Z> *-\):KyNJ,IS 7ugwċ,̺鹲wdvW$VbuJ{=;K!|Oa<~W}j7?ם,ݠ7B53M.k$j*Jo &\-s`4/Ϻ'MԜak頦fpfn^Mަ_ZMT}0=ÚzVUXw/'!rl}w[n7=7Fb-jοbYQP$XiCE.ƔS'# )^#p(Gw]yNN˥t@Rɯ"o7<h8^1#\ Hb)'z`vc+GBcPBJɼw~y@/i`OF#n{cj15J) ](#B/Jc6k0A4,*Y l(E7k,ش8\ r \ʷs:]ى g6j@A[wA0 J0"2gƔs(j49$$V8m0RS-yA)ıH1W?{F俊v|?l vfn705Ñ$[/-VDlHV*ba9݌puNHo)ziI(SXX䧏GUo"oubMlYZ+ =";€$̰xOFn(Ś*}eQYy&@Kqƙsg "NeH /6FΆqCVRGRHE NJrym-D\3 BU1z\uh!!cz#Bi牆 Q9-lH?aHVdh tpV'U(;B2vL2Fg$( &^O>%+v.Dr؝@Z{g}(l?V䧞0h3%Wpnk<9 ȺI ]{1!2+v's0 K s+0b9ĭiMSUf[&1?OO(=:ꃌ4{)V2JGЯ]Ǒm$=8q2 ;PXSwu`s F8BiW{#u" Sw'ؙ5Not ['By Y~Xݻ[ԗsj049ʹ?ti5= Ne]^W7I)?zQg3@7V[ z`-Vf"!-b;CZKr؊Ip6Xjģe1j$h4"`kt4ա=A {#ऩτ# 3 )%!"m('z5Uk(۪)\Sm :N=f@p.nczeT j p$HI%^-`M&iªVx =ٝ7Ƒt]Ro~s "ٝ$sK!HDf[쐳"I7VV}k5zoZ&ΘsPI1>@lh$Ra,SLYv:ƘAi©LN!Os):/U@QcFwWgK+;^]DG"e ,e$fQ'uz7*ILOT){& >E% $"Z(\~=Vhx]HCC FqJjyBADX Hd"Vo- >ECH{[?_0R#c:,FJ@"Bq9ӛ)Oe iYAzJfp\ ynY/vn~w -LtMJS Uq)fwF1Q]οW,/ f7e0u%p/tA0D>D^?2ϾِOe LaJ0a^ %#̟8]tJ0/ߪŋIl'` %Ff?SQ)h6;tMJkKBßd?:K}JwO^L.^/B1};He9vP6fRԿ%Ǹ%iDmK롖uf`YmfYǜ|0,֨a=Av:y~wsJV)^괓uUb%:q*#aoư~N> ?#3,c`R <{YYwP>%_~o޽͟_ޝz8__ vLZ$s;k޼ijڛ64ߤidߤ]dkڽ/>B-ڮ1GϯMw\FH&),Q[_!cFc+$\u6aqS.O%ȣPHzou!CgU8uJِ;K9r`r{ĞG`߹n",v=[<{ qR]}X4e1§~snآ#CEQϊ v^Fa҄:G쐤!D/%`,-Q:Ն;;-ͣCIcARI!{oښ肷6bpo71hi~&0ÿp}C?'_tj1Ky|0MX)/A'C+iSxɤ'kJ#sbFD>ֈ^irNLKk&,x MQxb1Jibb Ѭsp M8a)5rT6 /#gCrm~ѐmpT>3"QZ|}>AFU$ZzA7Obebzuf\mkvͫ^O:BdCZ=ۘȕ*f \קPA4*Ks0^S̚K]z4 $ͫ]5j^_yeJԸs>oA+ob|uMUt;wo+<g95y}~P^Z@91oݖ'A"N?|4M(Bt%`H[ cBDΓHUwS|J96%I##%I4;X0g2Q4Q&"TV&rG 4NQ Tp۬'I@ZXH$.Bb<E2q[.XK˭4$K +鍑P.n.П;n WՐu2^u-H'LW0jVe3?Q'wbC`Kιh25m;q2Jp䵦VDC-})R yQr?Pv<ަC,VՏ2 JpLç%#YFWn-:b2njq`?VQu"RЏ.7rj'q2TWϝ QTnhTA+{e4ۆ'ŪEQ W9 `7}rdžܲU[D_yUqpg{IdtlF&(!p$Pl QϨON1xFz,,_9gYl{qV k\?Q6>y{&C2/4`JӼ +oӐݯ?/Πm6} 7{|\g(Ӏ=d`x\4-e^@#`բ?~ߠTp̋Sg%pz_Frv bO3\3#h٤iN;c\ZKͫMӜn\|kUwen_Q]3N´W= +Ue~Ʒ^;oh~0F.S8i7j9ic7*[(P kɻ Er?<9ԁ&ttIP/W{^N2qgg ٰyb{7Yìb4jN7ͥkoj?w۟v0겾հ?j:}j ],Yuq\!7V׃T4q>us͉_l}(o?n[?nhmhXiƹJipvݨ E(66X8p CH-"uH"R-"uH"R-"uH"R-"Lڴԭ"R-"uH"R-"uH} ZSABN :kuEt 0C vѺ?NAZmIM:H 7]  j?k#%­ږE0kВG)t&9ba9prˉ!rK`RkjcƁ܁h˨Bd-))Dak9n`~*n.֊d2J8ϧMګ5Mt|r Qcov9m٫I(ya ׊i9<1QNfpݯ͝WWɅyl 1J*>#^y3܌\YոD:=b dEOȕ= UݰhVv3,oQCj#G1~z:У>Z앑N_rU+U.:;q*#c禯FR#1͖=QWw&;p>y )57횃ؤk^'I+Hvh~?^OݑO ]7nVC_AQ/|&jHfEhІXAgV[j2cvGw@[ٲ9Ûgn4+}m~jc$C$xbd \zm+C;:h?"f:iADJh x)&eKA N-h:uN->$t9>r*Gf(W@Q F|P=s@WGM"P8ܑ}W[_)#dð܁;ˍ)6$fmVԁZvWZ-tNZbVorrI./t$z]7#q*5h_uͧU)!Ȩ9e%)f۠>m+PW;Yx&ɸE3IM`?1|b59IНgpŬY]O `ZiK3H> PVu 
IȪa֧;g1>t0|HRWCwn.ű:DYGk?qgRT#Eg<0L$Y_0tO +[#i#I'^ȴ.ga2=eaqd[,[~q6;7U/ Ū7 ?Tcsyy?:s\U\9bc7s?_ɴ/i՛F,|=x1\{le9v)mMG2|mjNmR5A U0s5dRŮ؋V)Qql͙T_X~tEBkm4TX}\:9mB=C)MR+5 jV^dURbE ):&h`P>4g3%-CAQ)b%+@#~!Y]lUs92u?<[)sݓ+/]~4 >Kn淴cF3BVnT;TSqO1sUx`(U ND W]s0Ca>'t>\TئHSQ/ eNFD0L)a\ pFZ6^gPk'Aw6pj8}}QօyF˕kdJgr=H&Kg*)mK%兢/8Ȓiيb2H;"\8'.ji8cÁ˃+(0,㒨! cV`4 I(Cƈ>!|>Zt_ГdxfNycpm0JRHq8$ m ,Ft?rT8!NyTRtD@ R,E#pRh++FKP> !;XH* F<>C8P`s$D5PA'ka1$E,Tn-W K"0JP\żQ|J_KT \Еbal_})Xb,f 5˃A#^hɾ&mhlW;o¥ο?e( u;aD0eՑ;a'WLĉ!aNq(B3rI')s j'jE%:F -ep@MDTP*D,N$8k5)*()FΆZ}HHF |:) FrM.M9@^jsAP&xF51J9u;e3m,Җ ڰ\=*$*eK 2O}D3.9&K#m"~A5h=n֞宇㓭PJoR:kZtuk%օ[fqG׹MNR)k8{u9/YR+]s65D:uz^HQAyy@'}و tQͨ/$P,.24"d 6HIEC"Y6*h1[.,BSXB gw_exf}2z'(yc{^w45TυA&&EdI)bmRie$*%jl< wq&W+vLbH0*9kl79Pv1Rk3RVk3jւ]N*D*,(d3;UlұPw @hϣF]"E^sCb]H2&mma}XaO*b<X?ՈFFl5m 'o8G:@ "9Cb\o~+E[(+ E5JH.hJU׈K-XzADmFUa$6 cJ?tz=G.ƺWK;DC,E L6Lz܊;`-@xI=1C# \ƨYu!'Y*NNk$䦨OE*5n '=^׭!?^ϭ?(e`1.{(#.$cbH,5jfT 〮W)DF؋FȼA*V1m$//ykPsrqr여5$)R!)9V H"H)1,MO8׌!jb&D@xDžTʡ HX!N xxzO='yv*ASw)P+"DQF'C, 8=!iNU-`A˥'OG/aƴ Bx9PP:22|ʰƒh$2Шw&HiX΅>EբAf](QZGFJQ']O <";җ(Cd>3qk UNJKEaNs^Y 2$8ve C y 7m\|8͂Jt.՘Y%8(CL<bpW?p_횆ۻ/3 h&$O* 7*ya<9L|Kt\"RaᅨbT^Tj[|b!q?{yo` gbJz&K0f{~܍=m!9 ֓C3$zw, 5 a6t7lCYԪI2C?(F@@HY(NR>iGwfC;!X>Rؑnӑ: kjW *7/5ZLB}Sd5\$s/Wnj'wcC\벡X\V6en8u:ox*e#u~OZ^JItJJ\,ym4sw*SstqD^kʒxk&ɷ!w5OPtot ~(K\hƗC?~1ܐf}zsy)T٣1lLg/4nqǍc\[0y1)`/e5q LF0\沈JTq=$.?$,G~wf <&mlv \SÒ_jO9pZCF0xfW 7N,6t8XsyPOx8Y*`ě$fѡVu}Xmh/ "=J6ד&zn~v[H̸+@Sp.^&\ =#$!xa iʔf)2ᔔkké(-'0A"Et;=Ù$$8-@ *˝ϰAq -Gh!yV.E}Yla}-U3޵󩹝^m~뢜*.`8yp#ϬID6 4ѵcu8ZHxǡNHc&PfTetfL9`p':yg䬗z3h4͚IWV^pW)dNK;?Yn3 ,CKfY,YS ^+:0W0;8=,m = $*8 Då O=cF@,)iZ+I3ߨn޴UM*{{up_^ןBRv;9ې Fq0L\4?Þ ^, c7-}6?? G H*Fq hӰޔ[nּ'=Bx<B_?{Q%j 3Ow"+4xK{۵OzD",j/!LqM+ S po^ׁzG-l:C)ԦETɼ#ɓzԻEF:zq5*CzQ$#Z{(L b'EE힁׭:Nmukf{ҴHӢJc7MkaQW^Mƿj;PBNDR M-%& IE٪muMG;92<=, #41Bh(1ͬ!VBYI4F`))HG>v 8 Sl[p;es׏tB>/e(<8q84#t1/Q捎)̘*9nIx;R{{;uhrSnyNjy`afA]8ެoO':bko33I%S6N5^rÜ)Q ZcI2iCөfSzSzSzSsdRcT+(ByD>,$2sR`3P RnS1G phLX"Hd,qGkˈ'EcŨ3r+F( RX7a{%h~)A N7D@D/3I}򼉄3OX$L,?$kY<>\xT,ףƌw#Ocq؍\q~nJ_jg_zwRYW2(P{Ib.(g@NO]:٫R g7Կغl0/9[%壋zW'Cbiy5@ Vsl«u+.oοY&EҬB&[3'^@ɛw'iFd{ <[E# ϪY12r &ц  ք0NGc[>&aZKDPE2n7JaOeHo5aMQcbg(1REg)(疎X%.h^Ws|˟4w(~W42}A͆3/ZOI~MrOT$DP&{.c j@0ݷCU-ѱj@-UH%'UjZr#HdGW\ZԡUNp˥=$jjFnVy!Edp"ySR-F؟*7I Kѹ*ţfsSS . ΉRFWD~rތ`%7"W'^? LoF%=0o.=VvDp ~rr3r<j>xTxW/#H#xuW@BX*RKUW/F #H0;rq,p*R ^ \1M9GW`A"hlWZuUW/`ݵ8&\Er:["5\Z]E5>r1fW}wHP'zp#~84s24; .fަf8e֨TAA!+Z7&|zߋ9nLQ ?_;LeMH&.i3UIc=« j4.E4ё.)ÜEPTЮtѕ~XbJsX*m:ɕ !`V}Y apxgjk N:;n)D$EGR!3FrxCjħxƗRI$jgZ&8h/9vžS-9KR 3sKCCgd┌n_+J#HFѐ@/7gʖdTKyC2:rJF( *Q-˱Ɓsl#lemß 3O*^i sL ,{X+/d>බtmImm{Ff%\#8"x#h#'R)l=ғBOpj#rzQ+nDRlWW[/=(Z\a)? !v4p yp>'\]ZzZrŁ ځK3eAW`&/T/po(MX_o=\- #gA (u]c7W0h\s.B0VO_#f ̍q՝+J4Uluu)e'͵AT-h_:y=II1#iVdz{jF[ȪȃTD3tEwo1- Rxp[jyyŜ".kh#cLa[%1P`&!|n;C)/..0)g(%\e ELPdT`'x&aCRJƞ &7_%I)x}KV8+C"ϋ?h>6#A2]dkTf%$pJTT0+,F bs ڪ˰ ym%)QGT ZO$, ƜP$hdoX =!CE'}RH-!&CK9 ?{ƕd O;ءub} ,Ƙ$_fisMI=դ(*vKԞF$u:uj9L AZZj 9Z*\ #JcTFKݫL=SܨڠiCoud~tޭ-5+|BJMU6,`Rym&5!zIKwPE0cRN›A1-.bnt.)KN g=„;CBȥaUV{cv,urS2H w̺(y hwobJ9d|HEb5Fڋ`3<%cNօM,iM)s9j*ҋqZJhH%%Ɋ] seNJ} w:b*ÔxXkQJNiBHH I?aBE1k#*K|S+\.@5#JTm#JAZʁAN:2]:Xi65lD\ ̑FEUWfҙ5#%h~GBY'_& Jۡr{Fj Ƹ'e] %H;7[.ƞ4"/_PH̯&9%R,qPBi5 YV+$1۽@VQ=r+ZQCk>84iȠ xwtrnoa3RTY7IQŘQE!jp0!"M@ 3080/KyJ><]-zDt$Mi/ bxs:p8oͺ6`"+]LAG=(]I4T*#h`!&SQd֣2:/3(Z Tx$ LjyUA|ڄLkptqXF]i(^Bdn wÍ^.z. 
O&!z*`YeD]bM=A!%xB>h AJ./sXgmG]J@(vo14K r[O κ$!;V@@` *-f1c &$˽e;VE̓Aa9J(Y3#t]qcm\gRHQ](ͮs=(Djw7qVUp*yBS)`OGs}2: X^i?뀞 u@bX: u@bX: u@bX: u@bX: u@bX: u@b뀞?Ǘ(tt@0ר'.u@2Y@fX: u@bX: u@bX: u@bX: u@bX: u@bX: r~J: ܿq: X뀞賥Ƙa ںfQR޻Zgfͻfw 3aϬ>z~?ޭTdK,)u&/@(~/0P1lV5?w\B媾r00C~B}yrMWRi]+$Eٜ^+oZy; ^c?V F s~D?owϼ&yyeLW<6VlvqqW{UUAaD_~yB}b@9j>- 2?esw`.K[@D|b|={zr]? nCn7OHO/ustMV>Ȑl *jVGu OKmz^S?.n,9wVO^g?Jg޴_LN>\] nοm"O+%.$Z4m!^ {M):=}ouaWpOqn0n?WbWaƖ1GM0#]_u\%<;Z?A3W:'k 'Lzwυ/y ~ v}t{jCQ%fa}3\ igu׿x}tO倸8 e >7V3( AA՘e9VSbwiM״}CHgu^9wp1Ԯt|¡z}}\-wr3ݾ:lA8[]._\ߴ| *-hM[ͶrRhlvۘݿ\Vp;~6?ޮVO9_vG7'_ݹ@68m?onG;06o'_//{9xv8@~awh^-w߾||r8/Fv #f?ʤQ Mm\eG6+M97N]'Ez/ C@xݶ52}q>v1GŒkm+7/~,{]nQ"c+s; jd376J93aR2oΊ Co`%ybIhu.`@¹ KA:2o杣p;Mmʽݾכ1ֶ W@|7o\LY䠸p]`\@jN*f #γ$U ^VyVcΥ՘sk5lZM%!!P+NHC. /ӊL9 xRW#g?0z* !RvNJwIO ݉}A8|I| hqg!B AD L>}++?T#%%DiJ5 cEeL˙shy 廻KJЧomzъ"< "bG5\{rު Mhxrߎ݇9ϧ]Of:6F+w?Y3II}8Ho #JCA5EucY׳H %Cs#|WrøGոB ˳f t$`w(G:5ge!{ v+H6yN:aEM՟R%\Џ1!Mv!CVpDȺSGD_g߸ڏ D"X#\|Mh4s;+Qks%%3ڵҹ- Wgݑ 3K9\se `Ԫ\Z{*)6A`P?3I!@)ؙ~ۘk Ćhj@Yx汝x屮lǺwҫx ɤ-ry+:}T\4ȊsqW*댩 *F攵|m&jcBjOAԜ!K`#Ci% K*2S{Lp09qҳ@i@ SI o0wd@ t{]iTXqtzme[U @| ZU TЋh(n\]$T!$)R.BplP| ᅃ݄W( Fa;AU^3bx3C㶞ࣅhNgӺ?oT^\/ ɹ2m@LP$áMfje]kR{/sTn3T*(#JIB WV|J"CY@oT6S^jdbo^PѬ %ՉKYJ{ bx`&BtXBj>3V;pj?yj U[kH y7yL_eb" x~z~ p4kg.D$m2 :W#s!eln[WֻLtl'wf!e5߭wkyjyƓz^k{?W->!{~8H[}uO|MOJGRڬw\nMPdcY9~޿hs+I:=ٚ%F/]jv}8W 9<`!q ׄ\CP'Rr%^~ԊHI?JG!DW)BeN@t&JTNrf$ÙNE&+{&l#(Hl,RY2@r)+x*#B3|M =NA_Xϓkkw՘u 7E sT1v7L0+ĴxBxZNNWnfyJ(!Dt<ꕸm4x_%j)` s,N`%w6;RJ_{@r H^v |~mV[ZjtRsIb'9䑵 4Gϵv6)ɩ+TR"4$idZa>4g3%CT{*YrP:&9x 0s5LqU. 1r{YvR=NM ڤ$ڞ*nnO h:2f j>he,-\YDqT_¼2v6UݛYDx\|2!J!u^ڝ.k3 hSGe8Z P2xLy&s[& T68Ko+AWPj898@hmU 0YQWUۆ (9T"*F㥢nFTRaB7W6Qd9hB"<9bӑPT*)Q%ACBS=df^r)I"i%Y4Ad9>J+ow;ͣIn uNMh5TXƈaZ!H:!H))W%>8\`J+X yH QiTa}q1-i'# Fjp\3 &Gltrq.Hu.:\*9<2c,ld*4E )yw\*nVR%tBUǮ0(yOi^4XJk Zn j́2TLep%!VTx)KU{Le";'Wq_XoWxTP",ȿi @ C4!9͍ pM[:oȾ_W5c=ʠSͦ)gRuZ&'jPc:pIʪY`ڕ hˈU*$*24/cD3.1*/ךP0`)r3*|"z~*'GUެהv\O`JJ5"M8K ΃$ <A/ bL;tQɨ _L#w9<L̕\Q@I uLֳ GQQZr TiXŰJ1YXlg+ maYhZYxRYQ bzqC '0/Z}g * ?Oyf-CekRjU!K#>dWsJu@(T*[$p"#{Y\頄W"jfNcC͕TQ֦tR~8k*HbH繄f-ݹ7n#I+p%&q%FF)z4T!*>fH3e -ڀdjVxk"-(9!0L\U3h!YJn-H*C/kB$^ȌW`8FqYe &\|t꛳H{È]]v̈t`āo|-I[A͍EY+%o5Dځ-' ~PBG)x„ˑgy#jI3Tx7;s;#~80>uv&%wE1/^9Xr D*  LQ$S̠͂$c/>/ v&wC1PXk37 я(1R}\.h* Ri=%OnWIq]_[Pjݴno0D.]19` , 6.r2Ǎ:cK`: X$̌s`n1B0ುw񻄦T\HZg2Lшg"n gCif {f1D(fß@g26giޔj'8Obvgᓟ0p*DfxanX_jc֭#ҷŵ@`_ɦ/>VJidt-&Gab{t‘tP9:GVc2;.tZl贠t7ƫ"tASpaR:QK@lsͪ&zM6!Ƚڐ۴Vm(s8v!MC_L|2lSVޫ^^CեjCb crؖD+Z򡌗oow"im]#V5l ya;_P#>D0]EW]Hy,C]߭ SkBY)eiֲVL{}tC"Q.E2)\a͉xP`$&]0z~/fm Midih RguO8]|t="jQ=j\͖dqPM{/Z;obi#gy[lR}N9¶aR-gVA5"}+11t{O74 U,;÷vD[]k|)BC|ŭnGa3%QKJUR .yn {5NB98gaM|BJhCJ; 6 ~kw)`\IͭDSl`: H3Zs8nH%/g3{n½}ΩoΩqQWS+ 4)U*zyuM~SmWU*'w7a(b^9+_ fzýJm^(U S Ћ"a+ ]B=+ pVYFYO8U#չBe.,EW2!ܻNCJl.[-p Y,/ـ_kT]*~^IU|q4LTs7ޫ3l:w $2v*3=Jr^j0iL>lV/ϒ;OL>~pꐮT}+D4d'HWFHbd B\0NW\o'CWVhbx SB7thwBBtr!Jt`Iȣ[Wqj7mkoPRt+Z@[ک⋯jQBǣ֫B~uR2I>"< ?o~,hќȔʄБXJqBO߹TrBlhkF_tO^4 h)NӈɁ M3=+ ]!\BW}+D)@WO47tp ]!Z4uJ(i 8WfZt(%]=A>b@RR}+Dkؾ|'HWJI `MCWƺBͿ}7ЕV>f_ +B_t|EW?Vxn( ҕєS#B?o/th%wBv7{tekz28cLbZ~J@ _|~ WfchZ%hH077M,#f%UFv̬pc)𩈌:FΕRSש=-MXVt9Q>噠wdp8<b9 \$qQK*6J_/+98:Zs%F/UyWT toKYQ)- NhY6V|P's8L~*PguaV =Rɤ]P,h" b&D&l]~_O{gⷳ&DK.%UJjEQ5Jˠ/)jX鏩pmo-{([wSMl9hB[Jw4@$T*ͷo:Kkh kmd!+>Jyh&CU\Nq &CRL-8W4B cw<- XNW?} Q~ {uFTJ%‹t1;6ܯ;8bK+nAﰕz٠PJije%SHKVWVGai"O'6dfӜIy2ͽfٶa|Yޝ^Vlgjd$`Ü '3ɒR |aA\,ϔ6')ʰt(9 3\0#BZ}+@+wBl"]q6 $+h_ jt( ,vCW%\B;]!J98Oe-CBNWR ])M{DWяWQNWҚ ]i5]!]!\՛@D3X+Eupuh]!J&jJn97zt ySO>Vn(-A@Wr;O&=>JBFԌ9Ԕxu~)nO%.-A2,dygUl Ο@j*#F1;t>p&Gj-IЛ?߼)Ӯ8hG82@sJS5O"JSCW3>?ßU$<:Z)sUjWE(E5VͫV>r 1l*RV溅$nmk;jVdi>]N&Ϛ[igx_g˸uO#vyu¸벮ؖս?jqZ\0 h2& 
͝Pgc6e҇,V8ǧ_XOk~gyLCA\vkC]FB}>n8kuKh~t9>:0wlB)R)Pf$l>>oU\ZDj7i%(+2 *pJIf%n%sO#ly\-EF먋\88g$7<:5c+@rO\5Z;+YQ,[VϓbxjUZvH9LRj* uUwp{e]-eb+ 7#on)`V7{W@s)8S7|Ф0FM9&px).s[eOt2g\SĀ+#FG)0S;,yN/6YoN Uxq,X(}`mZV/]%wz.m?=w0]TBKk%:2DpF Bs4My0$h&S;&Ol*ŏcP\x~2Vy$xL[|I@H>irh~5!$fuM.>][o[+yJxp}Mw.Ҵ(P4x&jlI$~KKؔ%+@|Z&gq oH gO_yEn`T77B_=Y*z7taqЂ#4#vIk o $\o֮W7(r+q*bg )mZ-^v5쩬iN}Lk/psVff{0'm_F]W_=yJZ/EGB2yS %ÀjT1INn  /Pg@N.r'GW&Uaᑈp%-k#z͗CY+I~fQeLX N=zزz&t}W~Wȼq,8vEt)ct` X 98eJ-!̫nVeibJbfBL].JKa>37k)J>/{Dv\{~ٌBS\ힻ.F_-X:-Qwۿ{c혈O흲Ap9i9,gR%ʆǠDx=۝iW^oV[v-ΠInPـo{~s$~7c3hϤ4.x٨X$4@6ʳD2} &\jjX <6Gc51ZbT@0EOSȍe>xS&0-Z*0Zs!Q:D6[&lQwaI>rQ5qv^!u>o&X\Lh?x+P}tx hmI>G>VuBȁiOҴ pu54LWy=9`>RXG,[c1c2TNkERc¨:!EG _. $ݬyg(+l &Mٷxߡ?] IgQࣇG߹fVKw6eDRhoDѰ@bIsgN5L!J`<`h@g uN3=eɸĠ*,AS gXl+\d˜;a*j{f曟Cޙz<t84a0es|;Bd:VK׮&ד)^4E4ܣ쎦MӶͰrί-V& Ef tiL^?twCw_nfwoos8l|f Н\[oviݍʝnoquۛgGGü[sOn8i㕧.bj.qL[3ݱd{_||珑gd?+̭Gʸܼ^> ^VoXd(ohL. ׹?1y .؉twj$^#v ~/Y@N)i0'!\ۄXX#m2H7Ih*Iw FG99#:yJqZ0LF$8I/Aw#Ӌ<:]g:Ab_cT)Vn;L]07+J UQxuah(mfYk Z`6!=BDLݠp~'ґD2nkW{/O+ G7:muк{m!Təh] $IVˁ9D %9J ^zɴOIf*f&DAhkGB$S2`MB=$xN%UmWg% ^{mZIU$Ćve ..6Ŗuv bukY?~E} iUV% <3g=}Yz]˚O=}55߻ߖrnm Hwg]}y4Ēb\?ef֜aYtBb_,+cyT-V-ċԺ q? u[[9: RpDG1BWV^Q 9n*JuQK=/RԊ;gA9Xǎ8,|^ӚS:<& bo7\ɂYK"` 1V;:XQ?U_ة3ۙkG tb- 5b;iy޽wyķ/-W;/z}i0kOZغ'SfS)FZve֊e^`5!?|r4BKC\l05JY`|98n&hP_?D)Ú]ȤP[)$ctIbB,˻1o!h ͐vgGg1>iNbDZ9cx^!NIs׋z%,iɱ'sG"sR2mPv}(^k$&/,P4J8U''=Hzh+8G?5W}gJNDzJvL=:RqVR W2L-w&6s?*B? 6R6HI޺F' Up {8ud5ēՈKV#O(YpMdѨl6[$ &zr啃!JFR =I:LAj$ۻ6@S#QcI{dyyFeDiLh@JFfcJ 'tW\C}BuglBv-cIHD=hvd\feg8WdcNJrHj=WEfD 8 ii8eH=&+ګ}- ^h*KsVH K[4לfr%c`ht9}R\9%V1afzBRNRY`hC4B\ UuJW يa諏LD+*L*Ƽ"꾤L "T '2g-Y<+yyL)mWjۘ* ( YPR;#1+ RI/!47f'R6vB6vm'쳶a42I`Pi];8%x?rB L6|>8@8H1aH9( IYa$tzKn֏Zt#&Ϣ U0ҸBpWpq[ 9JXUm2*o4I^w┇UI?a$_||xVθ6Mt-Ab?3x<1+p-Sr_H)H%F!~3-> =[ѓ y8І+/&ėN L\˒ "<q$6`cFn䈾 asi(c̋-{ʙ*YIfs ^;}Y-]} p0Ngvap"=8v۷i/q:gKEQ&ϭ.*%iQ܈rkR,tϾ#j6>[㑟,Ѥ.l~Z9+:PjM/ qtxژNŝB~[Z|Yܭw>Z|ʇbc&7aΧbV:|2uO~Э29zw\+RhA^_Gi٣MZG7)KG'j gjh4n<rg(3^ձR~t} ˃4AД_J__ak+ڡb^q[Cbw~o94lˢTk~v#f767BwռBN"TmK}qO$*ߑMasj鬶U=llm:su!֨:\d+s7v9ﳟ|jtX:^}]=e OI:~KBZEvו/ڜ_x"&zԨq;1[Ƌm+rCjPs0͍[ƕ2\2*i5#stO}V4GC{k?n<,^WCi-J5?d8E+pk:6R[EІn勯4ªM7n'ڠm?Xp~ \7FЌz(6dE{]}l<; [6a{;"jPq5G[K]%mM5^C޸,}Yx&Z=)W`yΪ:jzvrߛ؄v>oY>g<w-w ͟,*WѓgS1|b d{zfi~ӕtev ۜĒ}«-l[*9a~`fpN[m/,7}A-:N`wZwilmv_{as%^KJUR \RT5DP^x(qsV&Ϯ~r)7P.=Uv ;\2/p+)-,T+[2 30\cJ+K9Q((Jɏ9W6b0xVSCSD ckBi(u(fVk ﳓg;r.8XU,Oyy*?tQ{zo ?U\a~߾qj|ZS&_@)A>%?ЗկkrQF^prq򢉵( p @ܻ]=9zX._-+y]-/VOHk6:,Wg/ogjk,fg7Ja 7qWNXcuz]Φ˳_{7kChprWVq*U1JR W XP P. 
Պq*[PM:\)jmTBDZWq*m]#4cVŴv'$ ʥ<\Z`YJ(!D!PWF >B=}p΋liTooc:KtocxxozyKp^C@xFƺba$csE%7Qfnpxm.?RkK-i烊y60'eGY,{ X/FMۇnkF::TTQ;en%eQL[ZIu}nY+Vd9Y59UerMhiʵ"PK)}iUYZ`|,\`E4BĂ+&*t\JW\pwr)0l^~v1Z#JنKM Y׀RFdvoP"t]ff0{$G .5vj}ܟIavK}y}FX#`Ϻ=pP >GPe+M>T3;6uk&p ψ>#bnONե܂B3Sa+Cd2;H򦄝Ss^Ԝ'HEcIbI)bL;xuCO;z1_ӎu0_5=|Qeh ςiSJ&"fw&J:P WG+6"\`Jʥ<\ZNC -KճJ0 cpW(WXpj WpuF+e<Xpj-~2(9%R&\!ªp(U,Ce*Ҥ/Lj+5Q,"\a aƂ+PێL(47/Lj+# xFW(DeZ:@%6qe2&\`5|eUjpJݚ^!>T7Ў ]v2Q-;L$*yXK쎫 W{5J*%h" ;}zu\"I9"LL2fJT`12HYABο׭0pQQ :yQ:laUF[ZJE0ߙ ,{`/j }iU2--<;" gPtTuЄ#R@6 Zb:H?܀Т%\=ਖ&"\QCL4B,\Z!CT<q%W XEvrWR BYT#ĕRژ@*\\ PQJ)CH #&ŴvW(WGԎjmkWR%\%x&(WXpjM_+m#EY`J}hf1A'f<#m8X$$.[JZ"[EH{<U:1/J~K X~3*KteUn>pq>wq(wcYptb7m`P头P9畵!ѹ*LJyDD}):=Ht$@.JIߧݚn̎}=ɕ~u:.qgWUJr Z}RUV'W±ޢw/ݻw0SdedqɹIUɫA|}|*ӿ\ɗގޤY㢪 r( _oRؔag6?y_xkڲ;?K/KsQZעsӟ:9;\vXDT͞}hI6ZK,,٦?& RPŦ#7ήYe0=C4A0ɵ r iKx3jg>LLj9+PVp4e{Jb^$g.Vbn_= _ʅ,ߍ_`u\Bd4*6t^}#7}2Tw;K6s`C_|tׇ?Y^njqZ+,YQ1X2֫\)ZU=hj$,rI fkΉ൙\\=,qXwRO'ElRo0bBZ9&X@fJETz壢*^,OkМ]iΫ/7 uFHٙ}goF:%mE xfMoK@W!(zTLt:../.0]𥖌^@Mp4Q ziaҏuO#}YX^*- 5|haR%܆e;&BT![,/ .Fզچz~4;}Wno}v} @/`k U cJ#68R$#Z{?\Tkn;@{Y~Q7 b}ZNv8hIodf  z>C]5C[G^[i iay26([fדm `+D( O&f:\=Ef/aA{-x5^rÜQf YcI"|Ze<ƫQըjԳy5I 94@+(Bi>Zl7gs$ðl ;+|~Ř} I8`,ӂ̀))-=à-0dd',WrڠO| ,xҿqec3ȶcS7t>3iS{Ng&$47h|Q8mwFVs]K%/:Eð1]y r^yN,hdDHw"Yr4#5Ě"#sIà fEګPs+ HiZUD2'Tʨ4SI%H8VAyMu'A}dC8ih08-aUC+ɌQӀZ<GƐQF@.\ y$D3ܝL!䐋I0K SKj9<7J)ADeAj k r+d LIq?/LyW;H)"ȟ/{IUM(b;d*RT 49E;FݏRP뚢C;SPnl gE-6 iaاvr-`Kr-tH}(*жx읿c:y)g.>ؔdKг?,+5jv| vVU.4 &M؊0R%0֦yN=M3r*2lڛ?-^][34_y?zշ"`ٴs3s!!Pм&Ʒ4l4V#h,o@}GL,(.:X;'lݵ.TkC[LeEZ Is`Gb'͈_.%;z,ЎBUaJ{᧫)}yOo>o~y VOR̥FIqGp5jjoQ5U쐪iw!6P]jUoҶDί//scp4h8N՟贲:XHA&x1.Fp"J~ޤ37 Q!Jq !ʶ#>NiM/ty҅sJx?  sAsV ~f&8ZKPNr9;J=yY)>_Oq6I6zN$H1 - gk(𵕂cp* 1*rhpἰN#ٷ1}'Z㉵tTNߝ í|Ήo_Z*7n_htھW0g"o貮De]\E˺S+Q?e]X8=YܭWtC88m fڤC ̂;Y8SӒ)E 0c4;"UKB6؈G;zаNIoƹom :m3T6t}%w߇@Y:sojl~?壛(B}ہG1sԘ+HX|/7v -ڳfFv4gHu$HŔx]5iy#<0E6>eN 3C_Tƾ HI6;Hқ.uPmn"Э81d|HƔ W:`ѱ#Z.'CĐH!sR5Bm d܁GK"R .D<~fـK5AN{N $4 d{&Hfs& f熈6$Qߚbr"'!!DTx=t.쪪AV4wfc|WMۙ_SMiKcu^IT .L{2(B9*hH̘cFL9uHP,eg4Ib"T"I \֋RĜGpR82T9;q44qƾXhBIVvuь1!$EVA ?+Glm)A*ΊdMkgᤲ :C:61:A Ln, ʬh`{!EMi 半 lLClLE0`ea"g;b(.mAƸcOYCN^TIZ̡>xFdaPJǘ` 5f(J&2adI WAtyclG?M'\+Ii1.Eø(:\pqs3esd,q͝ yAސR3=ŧŶacܱ/ʆPOa :7WUvfS\2ڒsNMTQ3x_9iRc\Q_iXl]. xD;)< $IVR9v$Pxn :`x>``]挱r* p7"@^QTXUԖ U^o񉑏C@9^_XL i|OZlRK14 \ cfY2uTbd=1e4Uo`XNi #K  O+eF.H*2$I"e1G-a5c ɢN}UB_ mb48ehQR~;԰lH_jċ׍?n')MrI ^,Z"zo'r"Y= J"|65OBiv 9Yp`Nz7i&wƣ*QCG111mшTEd.4INzߧ*NJzz?7 pMWDW+ `8Q>P%3qcPT?fw=ο=1i]4-4Uf[.ա_}ԾɜB]jlܐ&j[ݟQ#1 mQXky8_^=cD39S{<7OfLxNKIKxxqsƥ-qX~WW0k[i`N.]i3zWsym Bɋ:v5_*onA綊$Z-tpI|Go7+5}{U gmôUf.x[&ºM #]޼ ib&2>Utu+79y<([BD@vCKxnq1CinߛBcz4xj7 ̄'&6j7PT˷>&[yӽ.!%_7|bfknX<݄~bY&36ىbݗ 7 G&1F%@NI.] вS|年x&+% ddDS̜;nj(Q2/11burMm&7uwoн W Igo<7\S ؟Acsx;}Rd_ iQ̭ύtC|[9dO0sȢ<~Y3ry i4"r>CdkM6t'tBaxI+Hf,"xޡ:8$.F14qƌR.%.R -h"z)u2i d|hYl:5bc`]!)-fZn=]T~s!b"0PF@*eL^`$3)eⳲf%۪GJB/:]Dy"h&& -sP@* r*U:S$ɑ $#u23o]cPĸgiff]OߐK . uIG/:H3Z&lMEglpN c1fBձz]sN9g$KLjM6g \vhcd4aA9i00L:IGkH HR `Y dD! ɑt9)=>/G_q+^> lE8Y !?K^u,wrJ$or*֥"۷e\-jO7{yr|c&eVq$c8 ђL]30Rrd.dJ xq_ߛFÒS$X"խ@j 7~ʛ::$Qǹ9YcS(QyЎ?/K7%wO*$zxzqy|C0#J 1(Ԋ\{\^M[ͫ]]Zq^*]9l!nSݺ7k^ڮOՍ‡el1F {0llۥ"b/Ӈs7Z 0)d^ۓҫ{X׍X nc*>F ƳC3e%_=sp1nx9Z٫`{^׼afTNJH@HuUuzB{S}̏xo<c6<OmhvĎ珧?|ϏO߿tpF`- GkI&ᯏ#Aݻy.]&ߺ]5M~ƛ~+vkk%@R~Q~0 %z\ôz( R?s1/ DM̯:;UGV&;j(AuL}W|BMU~ߜ. Ƭq%cIcι IDFpR n͒Bq7 M Y Pv$1TGZQ]&*%I("N@iW9!#tzXf8f^·6 \n O@1_b R6W_|z4PZѫ%+]!^ӒG/%u4Ȩ,- (UQ*azgaB`,&h[Pr(H(}Ήf:KUDJ %P(xސulcz̜ l>'byֻW!m,ʆ.No(^dږ<t}ؽ*J잦MO}4oiڢMOX+7vg\^%gwq>,ֻ${{]Iw'Hejޙti>?Tywip j-rĹ]k.uꡊc>>@|Dߏ܆bsk%`ؼB~sX 7b (d2ɮ*(: 4Q{))nO+qA? 
;v ([ $4آ֫s -Ҹ-FρOCQ`2:URO1 H2KJ $>J̞-"&OImE6`Q"' q̜B \ *\7|zMVSo,xW헗sRMmBSnܭ(HW_:@5[Rh.w0ɝ>9kJ7 @ZN$;,5ѕe6'pDV3"pD֙`%!2 J(EE 2[cΊ|9mS""HPQ+MѹDbF] 9Z1T2fb%&aBPoLdsj ]L="Qx)˚ fUv_y\,\\y7H ǔA! k {iAZ,T:jRhͺ;闬'pwTU [bW4FS Wf5/WWARi#335.#_0T=*{P6Q+GqhȂ=\mo|`uQ1/,PVعrF-qACJ'T"eP7Ju!D>L-f$y{郩v!y7T J:\RZiVN{Ӫ9Yt.&<"贞~Y,^5j\[d4ygtU0.aG2SM<;nP]8^cefNjq䒕6e]O=~)FVƩ?(O<J߿3'e4zi=gl{g?ϓɖ̵M(a~: u6; [T- _x2P8XT^&PS`I=g02rV_ 7_oM~ÏKt7>L֜KI߶vr}cy/q۲Ou`tWR꾎n!:P]w鳽8Ko棭m~xk3[߆Ebc_u5qy`( ? QHS XKs"˒m䇰ݒ>_߇7{ `?x3NbG>:`ήC=[Yߗ|Hxp5foaTiŲpWoNMTL 6M`6-oC{诣izCݥA.YݦM~TJ+0mT1A-fX4&4*UXI]pRury[ň٧  kxSm _\\~O'6iYdgUf :֍90]Xqw&F5 xvi;9r/J|:F] >7 "PBXcG/t'()d%k:tŤAj墛D[k;Aݬo<OPߢdWu?ɉ+ԶgU\[n밿7+}A#VZ$]D3^ +ϧ9f6S|([oNg}/QTUdIS  %j1Q8pl!<8O˘()gDadI5* 9)٘63g?U]/9*ո޽je+MRȯV|W( ^>d. B֑LݓFH59vbz/FR{*gtL4a)!gߔ@;~j}0 {6e15,ɋJ3s r>+r"kUɎb;&m6Τ`WK" Ѡ)L6P~c]gC] V=1`R_/ˤH`_{0B*1dý@e~w*8;'SE',{P1`UAIX !*407j8ڻq2?S;ڵBNmc[upx|@-QSZt!-¸W,ɳ~r`PRDaW1Q`2ϼJ<:yNe΢ȴT rV</'Rp(e6)&%8@ lӉsٚL j b R,U)yW$&P!H%9ls8 *޸k>*ٕĝIPL.g2Ipt9b4.$Qsx**ڦZ8$-U|̦ F֟R*ʘ58d9'06vfwoi[rH\D+[psX. '罰}^}R/yo%?aST>`wHU0$ì6"VeP>*Q#$3(Rm{[dK+ T =9bdKœJ!',Es|d%٬PZ#c3sXmUaaq,X|czKke[T3Ygo.'GlcӔdf_3ƹཕLYJ&APK!!is)ŀfli؀P4߆셤)P6h5Ylm`9t/<n;Dm%ڢj vg'c9Z)eRQA"IbP:հ5BPr CHhKjc[,UYaL@$bM.x̜xMW[cQ6FD9  mc<{K1 yQïq+ W\`.TRѤQ a8 $$U8C"l2ȑ:*ُ?e~ĸ8[8fX\TqQ 8n%&A,$[$ DFeA8,2)2:X J x \<-YA>w@e , ZM }d 0XH"9f<tdNIi=K<eT;q!I0 GaI"O{RB0뀥 ~G*B$X41",DDꥦTH0<kK2B9kpYhrbIr(Tټ:ssnU#N.g*n}=I{ ͍tgO;oYMxg2ǥ=ׯRUOACj/gfcyg`ZRtQV!boKDYLunDO k-T*|sW%Js~(ܙ\d~!;didaOvalچj`k\5\ 3f4j|1Y+j$^m,)T:UR뻋͗0XTlȺZ}" <0gP>NlmBWm/sKfs#ǭ0'tcC HѢԭ31{rl[:_zn67z߇?,h 3ϗoLm,`▬rg5l]ka,S@;zc>a;n\f>>Ik~r#)6%BZ#2#o=blJa^iGjd?\Tknm5wvG;ڥi=}Hk7{{Lw} QIaH1'urzPVzJǯ.Nyb :/3#X4N'GJYgyށrL \n=?c EŲXzO*XCBFSWjPOscU,F#"$zl'0TRv7v93DfS8G'*A"sUg#3Hq zPM:GeR^G$7g=1FJӽ@z_?-@%_c Ls;Ħ1˿%M` [s %8/SNpw }C0;P[D]rӟ6'h@r#sjHxQ}y-[iLsGgSj2j({[Mڍu@+0!TU1 (K) %&z1X\ȳ-'9kݯ|guu Qq9,0eLS*v;_o"4|>ݺ3Xô,W [rDy9qH=ryv>X"y$Bg)Y~H &Ho9$ÅuTqBeҞTxGDJNS !Q* J3`DI$7F n+C^.vHXC :d(iB-C#cHpHXFhg # .F"؁4#Hc( B01ejs>qIA,HM־5B92N@[sLPQ+$Ak" E=NKҹ /x\0? ?8Nח"{R&s 1hXeJ>PиI??G].;8T*y\c<12n#c@ \FD"r "`oI)C%ycKn ilF2"A?(t`Ŵ@mvz7:'llo:dS }2V)`Hj<[1诘|O1ejkz,}QPUxu |B`̀@_8h!hc|FZF#K%P!@W^a.hn}J`AXJɤv{ltPxZr%PoL|E=TlmrN$H1 - O.V F0ƨșV6va2dƜu6l' hMWBk;%<6ʒΎRwS_tʞW#k:e01)>\\]%PK·\TB.C0ĥ*I4:[gU꽙Bi.t+Pz~#Wk ݑ6/HPoзc0ѸT 8z6 ' љ:34Ս5RWW+RWm T]fG#-S-g L0vV&3g VbŚAa#S" FT~{` A wq@l5oJVjsa2y~Xԑx^3bzñ' lN(aCR'YYIE{P[.ܺ0|Zv;松_f2#H4!q^@7j)p'rl),A {_p>nؾVjbeXb(E#*#\k=qHAY 6mS @hl NrVf%T8 y`aΉK ",s>BeZaDۤV`B% ),QLƌL^ˈiDk45[!-*5eed7}@TM#[Ba|0i035$v7>K7ai$k 3r'4 - :@9Rk-H103`EzKN ]^CSm Bz,ʔmQF(p0UaH:]).DTqj|JF4L BLJBp!(bقFє i0v ]FJ*0q%e-p-BY˴#ƞuN{ڈi‡0A$k"K$A TQ {1ldWgeW]k#%_O͹xzݜ AҬ*ҋqZ֒Y4d] s(r=ʜ%x!u}d #avDעȝ$S&mI!a+4V 3_'ܐ^6}F0K|#eoOI=/]FT,CD ɞ5 RҨ Ku6X:RȮB@'zčZWKͲoGTFJ/3>*`5.Tʑ[7Wl;y9Qs#`BiDM2(j]v_Pdp-C,8քBGW\k:!Pf;k.4Pu x0̦g֣)YnXkY5PVPB ׎]S@P&VqU:!,Q(G1N[5mB7BJFȌ hBwV42 v{Kn2N=+`WVb,AoTz a q2M!@C92X PPDfEQHc7uӝ1(QdFt[sVt6XY7ta0wD )_J&:&|K@ZA`I. p-mj(lDC4GSF Б5£xGd)(`~GjCB Bꕶkv2Z"̖.5=uE}Ju\{FӘy}WBb|M6y-d ``V."2@b0۽@VQ=z+`"Кe$M2yݎ9D~3RTiŬ椁1&bbtD5!bEP}fCʷ7tzDJ0V%SdHW=$V*X( TPzz/34XP茼p΀4D. 
iՠY Y֐0|@^BB%>y _V 5ȤuՅvtl~t(X:?N5~t}%y E*(:%Z!|1Hܩ`U0>OҴbD>"CO&XVXkp`DPhc ٣.'5H/pC4*Kt](_>qLBP=n;x e@Ez(%v n`@;뒄 X3%@EW@Pl5khbF,X-;ݑl,8~Ⱥ*PEȚR\zڶƮ'd(J+12!~R* qwq`DYUZU@`* a!dE De#@ [Ӊ6kLJZ!:F& h( `f%݂ڠZMUPr h P Q"4y6.a*`V-6mz+q7WSk9uZ?%nI&Qԭw@7ۭIT3 .ޥl% f;7C(٢mðYk Q^yuoѮ `L>ao0CנzR@k8n -ȡmM1Wts\@7"fhP[hNJttŒFu S\%3 4Č.mQ )w!kv="Ö+U|Y}E8ij)ڥOn Y"g0E|AWV0 .LB)E#"Hmr3,ۭ^ a+nn9 FMR1,ʔ'Cw]+}9`45zpb-uj+ɼ4nC${ ~Ee@t]z][x y(4>X{6O10)Li Ҽ3b?|AsbOnݿ+ԭrY> k~3UyLϟj׊m mΛ7-G9g E(tV*zoꥇs\/ FhW?ʏ7E.migPT-ͪvz?WԣS?~Ѷݱy Gj.̉H;EmNȀפ6"VJ 7RRўJڈ( ӘvI4jMn\i廿hKyr* MI65G¦d3x397{= > d+dVth|oꔓEڠ]eӦ)Vm>cԄVcpYTQA,Dv5QL; c/wb;1C~%`FS#VlFȢ+.V6GqzMySI韩\|Z )r12|úM;w_>ko& .nN8'Kmȩ]S_]߮ωQg'y0Uyn)::|j:뻰aț^_?=m>|z ru_ã>!pO{hx8Op# (>?< 9ǓܞȉkjWi#yaC'WMXj]ݡO~̽θo)||G:};qs/TGzlJ8D!g gR]Ow7p%MF1K֖ w JD)_fMQ3_S綦{5g2~M1 ]WNdhmKkKZD%DZ\U֤6tMq~k9|Si=1>o=\źݏsJK[7>@D[j*fCXu<̪g;;yUcO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cO9cOU@^n}=uT7_n=4;{:Ԃu?n&&R%Z¢ ?+o[y7pEMgуM3\Dz+KSW V:EoTBP}g! 4+GX]|Zܾ\Om:zM.e][~m_l?ΰ| 9VB">bo^X{}rLw2_ fRrz{/7xz7mx$]偨\޶-#A'TJ=_..6y]Os~so~[[^}ZQ8{Olb}Kh?#+U? g'Ԏ{sf{FB}J+eiwVci5KYZjV,fi5KYZjV,fi5KYZjV,fi5KYZjV,fi5KYZjV,fi5KYZjV'V[EHZ_jjմQIsV#JVjwpis9e}ZɛX)ӟ(3@Vm|fF*k Le/p} Wgp}F pov Ll Z촅hH Rb^+|IaWZTh=gpwS~x=fQV1k_*7|xo%ϣzmIZ{1, $:9bdOO xX8'=>ޢ`2ofA][~|@ l нzSwӛ^nf/R_QUΝ۴*OB\&|߻N߬cβ8RǠÂ=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,Â=,~/6iэ^~-i2|gkv;Kkuu|}$,7 r:j,7? D[n20r{|kBZN_r OA L)9i"xfonqH6!6fu"O۷b^sTx2a3x\yMAs{詒h?#Mܭ>;Y\گX:[?55qIN{vQ9k*+[%ڔ0:uڀjtq5ǒB++O3Ϡ8af6`vpвw29R?lky3eC˿3)[nҸ<(BNyJr?svC|XgdY&PMQ7/DU5ݓ(,Z@wRw㢴z)Muҫ1Yo4lX8nyC=ogq"Y-]U܄ՇfS] <7S⥧S By`u$WVv%c2椦ǘ)ڵcR"(oMse|r\ΉK3|f Fj5;z$P Di a(_Ee xd͆輢|Z'ƴ,bՓj}Sf&3Fq~r0?/mO瘟i91jk/! a^ԭMׯRq?DžkjOfEOpML]آ*ql4'E0)̬0ާ_Lܵ7;m{s젰8׷Aboz @)R ŪiMj:5m"Ŵ||~2JXV>Ĵ*+~&45oSk<@w{ŧ땚&yݕe?Ⱥ|V\ڭ<9-9>=Rɿ״<bR\ӆ+1+Ĥb?Ê{gZK_6ÇR E`FW2FL "ӂHY qt=*07H+9.>>5$4 @Iqsks?$)|, >"'什gWo/fY(%n΀˺@:ݼ^W:Z 'û'Mjj,;R5W'O'Sʓ>$C^D/tagv49:8 S7,bzW/ =OF җ ާ),.taPpu]7`y)`XWhpq)<_Kן.g5|}ӫ5!P }d+˾/޾-2TDA.=ұd4ReJ=q H8IXnt[X{ϟ{|=n :~:W7ZQYVTwio:_-Wv#}_//Lݿg`3=2 Nr=5;K׿ kyz:g,uwo[&* Pj={[ o_ӚMTƞvj= 7^hɻkoȱQlzЌEnyc5OS40m31V` Җ1r1֫ݑie8R;?cTg(<8qP=$Jr}je蔦#ˌT邶oٝ7VnrxqsCaެo;e$(V[oP,qϴ40)`2X5^rÜ%Q [ZcI"!tg<ŪQeըjԳY5I 94VPy$>,$2@;))#T6o*B9&r6&έR"( -J)'E.wa\Wn+HKڥ{;޸|`~%hX?Ny!Ko>*@9fFQ2B1.5HiVjci#pJJK;5-]&vtY.Uhɯ! "D:(,I_S%[N)F ЗB0뀥 Ăi+ܳ= >!&չ7@]wulp4Cz.N19U'Hhn*,10`PqrwFVs]Kp!`p8_GzaQŸQwl>hWOb)>w|`ٓqaW[:Z) +q/?{SLFuh`rgxayHChESB#0%x_`:QЮtA&j%@{W@.ej_ e\:\o(X ?p%{WZw3EbHHv=ivRE/KЎdڔA>HAFgoKͤ"GmO?ьZ .1dҲK8&1pE$:[/6s&Kh ebY*,3%O-vjl`$6h$SX0ySՊM5vp3n]YN]nbg?F;g=hmrcAдh<ޡ׿*m_24 dK^xb=Ύ5G„`j[Wؘ2N"Ƞ>1Gl&e=fe^dm7gDWz~4ʿ>*ǍU*8u}䲾ү=܋<(qW<2Y׿5AHd2PbN{g$0i\OOhյwf*GMxc؇ty7_ӻUE4],7WYȔGa4OJݛt9: #Y׿=7\>bP1%s_eTW9-<>cQ[\66./~{→_ݿww?~-:XP47OO=AϺ?ߵ]}O}-yC~{B;NWsko@?~ixj˷ӳZN׳rvPQ^ f~zu,]6o!ޅ cM}نҳ#!nr6xXIZuЍsP_9Qb+gDLJuƼ_1!Lh)ѐCO)Ɜ{ҪJ*m.>Fd(6ɔ|VXn"8!cFe=Dm1@ I OBY cNd!' MIUӺfq:n/*ؓg.zsqC.Z|waY͟N Y99dȜu ڣGon7mx?kݍhƝ|6hgGgӲ_Ow4N?FƝ|_ߺoͮ7lyN~~!1sw&s qsoxW1C(XmIwI)Ʈ(R; )12iLU҃So5ȵ.>!؋' M:n;p rmӰO;Hɬ:<􁦣WO3'NF̓9ۻW|7~>M7Ew I'aSxqoz|a}Hs?Mk8vP7E1ފ:e (7TJ.Stg0iF? 
uFt8]`oI-MeWM:LJ):̆yut jq*p&Xf$ $@49 G"J=`iS43yV( ,lBH3q9!ug/hSDGۜ}Joa͛u{o'N@ˇ_O[=C80R38(1ٴL -B}&-KYMX:]$QYf%רd@6$P6-J)'P%hR령f6+ԵMBu_1W븁)v| 6OjsfWm_bQ6g~e/]%Z [ĿtLe<2SuHz[`=Y8=?S}Kg/ߔFwH+m]7[vı0TҥyoG̼P췶,gLp  $^Y x ѧI^:LAj@ (Fa*,^g_y* P :46v Xќj.\4GwVYhkꋫ5(aֱx 'DV@%:!ώBBKcRg[L%kjҦ I" NE|60w>j/Sd"Dad5* lLk~6V2_yrrl<{ +3 kRLJ T蔩!}sCRR2[h*b ~z`B"*dtCuxzACיBMcۖQG-Q>z-b"VFQlAK@I&hR0RNeӹG̼$W2+]2PQE  %QhMrmCy%ҲP'26)$F@FsSB+m,VAu6"!s_R3N@}M&"2pZZA"UBh~:ƣQ6rGܺ-)H=+2V˔%zQ#VX䠓M=G)u1 }% iU9&dJ)1VApR'Spf>~q$qLkʹ~Q5j_,| ŔlB+A)?v@2ZEF}&J ~~Xa3x?z_TٙmApC k[tOcyWq˟VpXݽQJ]Rg<1McHk5F <^[MB*P&($V Ϛ3M}K=2ϳSy( l NI"P!,G"j'NX@`=!F-4o @/{[ z@fz R0cr !yc9PP:" (Tk,Sq$ 4&Ƥm֪q/jQ#M(IZj#iˮkW7ِ' =|w&Ϝƭ"2"4PH"! AYiI:Ϊ(sq~ !li|s(nSt5 pȤs? UxYzu;K߿S~qpgR prpZN-Ӌ`8LNEՍ, [efSϫ[ :Dg1B/x|y;&ToUe)qRs70~ Y8xЫNPo<(K$dH8^P!n[wz)_LCSMn$U֤?1T@]t~1=5g||4—/~?aqFπ}nw~jx!: ^e]خr߻^^4[>wUD_h-+RD", ;`V8YBw::{umK^AC~ &?gxH\p4 ܠkt~!\2vCl61Q0:\|H7go/>{&PQ:HǜѨreɝI~MoV] [>CK/jDPRލi-ș?ׅVQ.! F*6T^,$&1 Rz{P1oXȲ~“2E uN8MN;2`כ(d#;d?MYnAAzSl}t%o?qqZ/_4 -UP]/dn1rOݽgLaƠ^Be.:>2V3|:W>.m@OQCk 89xF;ŹI騝FK=֓1[R0i@(!dE[GfH/@qLrNq1z P|P=C)~Ԟ>Yj`YYT3û{Z,)Gsm:oϯj5y,2:<u+jY/9dX[N]6>a[;9h<6 t*p* IUp;o O=cF@ATk%i*^w\v#>M{`y;ПC?@4E~9Ba:˲߆F.{o>o ,a_UU;6۱;XUmگJn5_ڇw})~~~3\g *5ݽٽP֟[U@FJzk]7>nr 0ɿxM\0]rJa6>i%DcUzww Ϳu:^:E/`k U cJ#68R$#Y@ ?\Tknl5 g5FGu-Lגɶմi $f{Ӵtҫѷ?lVQ,r`hn)1H4Um^Sk(udFQ#֎rAhi% yJ3a2(vǴSQr8b۸9Av|Ur[So_ͷ' LRGE@ך91DRC N^a1*79fޞNyUb)I`yDh2Xib7K־9}hoYw8IgK X5^rÜ9Q [cI"!tkbըjS[5ɬ$RcT+(Bi:h!U=WOx9%&Hhn*,10`PqrwFVs]Kf=0]mfh; ]mRۀXKW[wdJyO'^ݞM4$} u٠tM{0s)]j^STbo Y8s.`9OF^ܠ^_/{t^W'C\: aPu *%.\g]mUDLjzHo^IOH tärΕ6. mA7J֝ݹKwn/N^r# $g\Kr1(RuҪ̯p_u 0+LٚץAr2;v ѠLy ;<f@:ILdXoo]o̕~}]0׈YZHp,-$HK /piL)O1]% ]%R;]%Zt銂ʊ UU+UB^J(5kK UH ]%;]%t銃4S+:Jp:JhtJK+!1*>! ѮZrtP|t%%c]`INgg*z2;VqwR^ ])+L#׮D1J+9!`{pDcᖮ ]5!)G[t2Ra9aHZ (_0E4\z`вf(|-MoȦ'AfSc^k5}{NC΢9 ro$HY.O srǏ.?[;@A*l4Hf!a|KMAS^c*5(bSb^jF5,+Y37CR 3BNi0+- STQ8XiW^!)lF(\gZ΢Fކ :`MUKd4&jcJ:.[U10tc1' ASqYt1Z"K5 P4ǂ<7=KYϏUZ^pΛZNJ9oY js!?|] [ j`ΑqɃÀ :t[Q4QA IM  <* 9N|0a!rroD5,Y-*yU#PKA{ݾ),ye o{qx|VYSLq UM^;3:_n$j}kwWUފ'%Kx0 [r'~{}zuGF#nq*0JȱHEJhTGg %FQ(?E)ΩPG POpʁ-&x# Vum"T5EUFƶ\Br^p(m2MrH"ױ_M7'Ϝ08t<25Υ4ZD/*+A֥c"BJ B@J"6TȠgED3"="qL8aהw[`y$YEA(VdP9{Et 6QzHL8cu4(&LR#1s)Iψ؛8YWmy\(IϸHH;y"bAsls Hl"Όzn$!{\|\9lS'RY{=,\#B "Gmb`:t.P*SZgS{DxDHc{`DJB%( R),FIdkL*`nDM)RRA9)Inw$̑7q6Ca#jt8tbx?&6C"C)۲Bw''n ݬnҝVrTG$5p"(0+-IYEb} NO!{;out5 5I'~(5xٰzYJkK?C-W\Ԃ׹>D|,B'qfVɩ%e 2} 2V7^Ս2FF'Sl F컦ײ/4 |OȒO./n,i9jeNP/W“'t(&.O\$è6{?_;`t` +!<8|^%UTy?qIi#D]1CKrZf";χ>{"PQ:HǜѨreɝ؃p4^'ZQyܷ:XwzOix|>P&ZNٜ^O4V Z\id7ui5B+)P!ILO"0.hn/R\>Dz?c]UeEz%'%ЊYr69q͘]95Q.o(pvH>MY2ƌiִ|ξ/n ]yI_)1NxHK7Ŋ:y'mx2| i7,k0ttPSjphbQSަ47La꓉Okj;~b}}v]iz9xB JV0CicƱZAC.F0 NFR.P Jo>-m@OQOk=8g+f$ vI,DZOR .lei1EB(!dE^.0wvoc WaVvR 0$d,ym+,:@6 }p:f$m0o+.[Op?u'/=[0?RaCxA䳯ѫ+o3ݿV /1-;;ꕊyE ^OȶX;<`4fb{(m\d WZzlwMj\qN2B?=Ą1b=ŔK[GSl!beP+bB-ߦs`m`cm:iCa ZGK3f4j|HV NVFii?E̋YO&Q*gfP.1>TA]6tr}I׫ͻ(?c}z爻;{c%VI$wIp89yhA~~~?] %*ٽ@o?-ӧL Hz~hN ⡛.Yp9i&zG-L:Rjs"DyZ4;˜ӻ/]Ho={s=5*Cz ^ 1[,} *'za~dXD̼VL{DLR$ԃ1+/pmd+D(vY04\`D$| =ٺZ*P6 +jbEEZ;u ^f;PVVbV #)Gxsg-O&h`VO{Dԇ+S(|GI8h,ׂFFYAYbgEJ )Ba’r4R"lA^,U cަu}ꘇHKSfP<9PlcW]ggMtq8}x ):ABs#VaZ3Z1R:`ZhxϩI 4BkM̨3hAQ1,%! 
j@Cg Mg}z<)YtyÇxιWu2nboY>e))g[!mgIkGZ5F֧tBu\u\ٛ仨V9W$ #''i%;G",Z#5Ě"ݠs@I&56ב HiZSD2'Tfi%1"AyMʝX (M~]LʪZYo3=Pː"ꀼ腋!O2SLh6'#,) rl2\Rc5QJie JDĸ$jH=+6r+d 2>ߕ* | "~pV;!7p,UКJwueâ ۣM?|}o_|'Gy5:ΰw@-@CM7Q_MC{mT4M|oӮ;ڽ>\][C[+4e*u_.RvҦ V0ׅ|8fFNJ*[p>o;"[6-:i_of>)J`0Jr+޵6r+"zx"b8grHmDz$[x~[KxZ;`XMټ|YNLX5[?DɢFGh圑AON㰜X3|]Ål(;A(T2bz.SFeUԯHJ(d/F' Nv.*UCg}Mh.S5{[w.ܫiuꡫ3\0LJ f|l1׳̤҅XܰHE镏Ϋ|$qX7F㔤!j&H?ӈ$D *J?ُ**X[BzknZQ-$#T0ܗ3 ]{tt.tDżP`cMjj%5T֋KO U O L %{<~j֏>kulI#2* B:@c؎$gz93s=Y[\*V81^vtJ2d1'`迮lc lDv{jxOMzΏ=#>#ҿ@ђ?GdI]Ӡ(L?- 7?zdPt}C89 %8V6Y yO'WuZ_yTW-xaLBrk ^l% DR> NCDkG/bZc Bk+B,D Ei;ǝs|n:_s8evUk_o\?՞޳j^ͦ 잪U'θ=u.lNӦt@qARlJ3*<-wWO9tUfOmݚMTQI譇WyHJUݾ/|Ha./[?SBmUY^{n٫|eCݿ[R,N_ʜrwsp Ј1~gL; R+a: \e9V +rϙKm"L)}j#t/K#oj#7=I4 ql $h3"ُ@/_Ao#+傄k*Fgn+W|6LrJ*aj'05솩  aiB?zAbQK.*E5ZE,HK&CFJ)=:ޠAyq{e :VаuʡJ2=BTIc({V*G ъW ./Y om6 1Qu\{IР0嬜*1*t%"I,PbMZx dvXxYApJHJZ(Ɯ,CT:3g ^}\G,ןZZ={-)Ϭ]>{v79KC]-=w=wDc,r_8?}Le)'$#R0oR65&g?x&pN2v2 +3~v9uo f72y?tyקs:&. \2%zq(*f!LM (>$l)X7QM4i`†PU`x%%' pNJT%qxwfg}q\J[.յw;zٚ]9\ǦWL2pn;J!CK#M 2W^RȨ5.= YiSSȨ@ v:ҟ7[r֚ :. jo@00x`9+4E"ѡ>Akz>{ڙ1CQ0ZmVYkU Yg1]Je*{F*:ADפ~D:yUaA:۸~)b f[w:`yZ ~+qƾ Si#-pj,#ֽǂwiWu\Uz-ݮzuȑw&qHLHKѐK[Cla1ZѼ͈{JKGlrrwvw'sB#x !)|)I)x룫,Aჶ`s(yK'/4'>?H6XBU^1/O1w|f ^` F`ADɪR1t:Z6chYUyH|,Og#wC, )cN$SP?78(gRL7_KçiMRj^+es΢tΨKx{Adgy'A3zwttqsFcq<^`tM g25 <]OZҸH9ϹOGZHBDS29lQO 0Ϸ Fu/\l0ҨcGdj:Ht?}հ1?_gS\6˫(7NKӺ3Jx"uSKZ˫ vV)LxWMW8 lqm-GԳJ Q@ `qu=W4+ƽ,>\[=7_[K({5뇁 8|#9[V |g?AO8q.J~Hg-_Bb3E`U "REJ6q#;l_L#a+A]a.hn}J`AXɤv{gXkI@(iZ;7f]t0&u"A2iKBZ( |m`` a9\h:ua2ޒ}Wfa^CcX}B䰱`'Z ?1vD&'BSE57V"FMLfFV;||I8;8mK(ثք'F#=%9L0'aKy$"kY"Mam'yIxi[tm~]rtL+9b)mURDg*U橛&[atҫ$aW\1 OiJfH?+?閃pF\zjvu_ p;r)}e[ܙ*p;*̀0()eiX@g)grqVyT6:2VLBFXTA2 o=Zb_$ϧlj;{NT\㲪N?Yk6,7h0d_fUiu2)K3bpYf 1;|RtmcƯ#p4X~6/WGþ\20,Z.2rcyΰǹ&tj1ߝv"ѻպSt7)InTld)tDJqo83*Po1:MhP[s39M8+ћ?~_/,VKP\W*yrw53CJAF # ,`F8B1"֔k]uq}¼XʹR^],ԙQ})ȚimrzvKv?[m vrsʦӐd΂2FrW65U١ljͦ*S35(yBC bN&4Ka=:ѨP4< l:-yA)ıH1WOpZjcRgjT&5F!"DFR1ǰkf% SM4Vri&)"9sZs/ 쎤x$ir*rɽWZY!ңqiW%ųŠ# .uXP%6mF<HјI&reJSA滤rc\j]HdhOrTENIgh,eP.@q`,2X"Qr'iY{<}Dۚ^nv %-6g36d3Q bQ*O3Pf9,`Bc#_yպ ۹B`;INį0m" _Z ĈA.!W.Iwl[j+ {;rA?gmrnS!))/Izrk{Ljͅ5Q瑁q.Z`PYbBd]r1RzoPQĦFӐ= JlR!`V0/2mfZDanm_01w Q{D{MPڭ{%[.Zc13J`918":B K0T!륐Y 6@g ,d 3  D9cHq>r\R1 acҨH3laD-"x[r&kJVG%vIFQ vB2JP9ED0B 3ZKFbS, A94a#59qWmy-\la\$-.xu$!DSE1PN_9 P83j빅; iqq+xwla< lhy_$ksX\;5{~JUO'o"e3!sfL)XnA̽ z|Aѷ/$у,;` AIIM )NJ@3K(VNG+#QMRV')+ &7GgNj4~Nf]}PfAg<#WE.(!coX:t.P*N8T[A+RDJB%TR0-J,FIdkLiVUx#fEPOczQM录w\MqEКz"XDtJ}cR6?kոЗz^X!?P2)EM[v=@1 lH_jē7]q>7VrTG$uq$gUY8 B$[9ottC^T;™~ꈠwKڎSqHހ\]z}p.K-8* HVoBQvR )nȒQf&>%~Ctvy:qlMPYRDH}sσ~a8+Yč`\BUԁeiڟd8#'*٬D W)r"}܏)$;kG/N.Eg 雔491_K+3,T}@bwOF2㎁,ĔO/\&:nMn")jV<\?u ް'Pag pxYǶuAGpްd8*&+iampy I jfh#}>g:}=J/n 5(N*H">EA?oeln.tAPQ:HǜѨreɝ ^=JZ>aq·VX|\gubG+DPV8ތ R\JL/ՐDK~L2#V`ʣD$f gKu{^9{f0 22Up ΛsVcZm\Qܢe1/bgHςTge(˱Uq_'o+Ff|sQ  XuM坦xq' W|:Mw⊉"xejt1 jcuxMޓ`8}ZKy}I0ļ3_Ox0DU'7W 43.n;n,l$"hv6yD5xFJV0CicƱ:\`2L(]L+>*(%u7>ޭM@rLQCv^zŌstN#ОHI*L5M4H%̻}H %tC>pWi7=Gozc.>OjRYj0d~޿Vf$(5b(YXσJt`/^i%WͫpRNoE0b.@ ?Iδ(܎\p~߼|v;j9.㭷R %eQ,o$g}1/ڽNu|xI}.lBR^@ALжLB[ϟ+^}oxBIO˓=5)aݘ?W'{]]*}Py;u˛2ilًfUN d:+YS1YދuO3촘C py?n5GOu]qrR  .J;)FZRØR2ktdea/ޟP۲j_ TzBDBrΕ6. 
mTB5kAIA0 A|PƲ#w]}E{_]koƒ+`?l`";ɗ.d3FI"}(m&%ddݧOWU7FTD?l2&TmۃPێ'nfPk#cgVm5lñ.̡jVC|ގjv)!U3kjWtLW* ]5CiJ5+5]X`#>)=Zy,IP%ohgVD?Go%k"KK]⤗>=닿d Cel3KME^rH.AAXݖe(',rw=<\PF6otP:x߾$'pVeyk.RZJݘ{Pث,o귳TV25˪1)fZ4Iy Nj)椷L8yV'39vY\gv/VTƘ=k^?5THXD{sΔػؕ%q>C[+.v7FUVMBo 39|J>:>Z.6_YE b))2;lBhٿ}T*sC j`B PB VҾcha&!"P 2wB\t5@B+,X0tpe0ս+DiHW+zHXI ]!\C+DkH Qqyt%/mJW[tpu0th;]!JkF ])M&Ѳ[W"]ixu P o:Fcjte҆DWذp++H(theC툲2ct5 3U8vm(th_ PV*t5^O6L^n-:)#S'(AUrbT8~(4.uKӍ2»f(UVDu#MpFD@te8tp mW+P0pI{kq ]!ܮ5iwBt!7Z0]!`+kq+DHW+a_y?Թt_a-* "Zԣu5DVP+d8tpU0 5t(9#] UB "\aB+Dd Qj5J[kxH2B#] 0f唿٩S2~.?]THȒX"q&g#ސD;\tuU4#N0[_E[_OvLIe ܔKAe@4n84h;M#ʾ%tiUhqmHHV%X\ ]!ڮ7CHW+.1$ T` j ]!Z+NW}Q#] `\ k)+ket%'t5@Bhu .t(ո0DRBk+Y0tp=]JCPJK+,¡++f}+DY9;iЕL]`3pm0th } Q^DWV9ξ+KE0thu72 JcFz+t%IdMuD~p;Π-&vW]tyv^sÂI-T0DYO #acd4eI^#Z#2`.Z3 xKwfcNVq0؝Hֈ_ҘPP)W7c_6*I?פ!` - \B - ZZ@֌2 k ]!\#C+@K;'%#] L [ ]\N+DD Qr=JhA :%#] VЀ   ]!\C+Dz "J=ZWC+M5lLJ ]!ZMNWҎ!ҕ6]!`upm0tSwBc}te`"!ڍL` jC+Dk{ʎt5@F f+u.pM0#J:Ʈ]њ]O'U9w ZR5bb%RztwlиmݱҎVd=K4YԮZc+j,BBBW Q,?HWCWLX8v=A:OsM7b{Mo.}}K؍t9iC0Zo/ݾ}n o=3 KRxxӈUSh Ádho}{3?1 ?yYύ[j>Ǐaiq*+?|.4E1 g.V0 y+z?7%K f@c z}Q_V ZdywaP]cׇzlTH|PVo8J?Ꮯ0MҊdZl͝@_mޏߗ%˫x1Tx%aN)kS.YjD5$EXZsMFrZ⼺!Zǭn~.{ǙX7񝘐z/1@nsf>'DaTɤE8rPP)|r;A]H1{Ms3箢0er׿5j0KwN#q»j_[|FPBU(_w;]Uk[_\])=F2.jC@wny/wbz)yc).jYQJK]?Y?^٣nLo;_$>S0JZI}ys4u.2gګD8sT:=Vg\. VcվCl$_}|W99 *~G;w3-R4Z6T UO:)@ca{K1~ZPz}LOS [JVXzsIz)ﰈK d~݁ QCl"LLѩtFkb7pfڳd F,fsyقWiDjǓp$3Q)IsZ^ymƼWsCTX0e:,a*M\9h`L9IH2ΥKu&)kksAU.r%LIiB1&Yh4Ceè58FYO9B:*ғ+̛glKwe;+AA̠p淳W1\ZOmS';]]!S+,[kbө)JTh$:^ޑzg ؏9j!2)%IB眚$.癶\0*pOMF2Ķl!y 0K>7˫-b3ѳWuv}y{MSO?o62eөC5|dJJ (1)OM4\#8!$@Ruy.V8'$7)3wTkdAĵ<[>hgq\,;6':?T>C-Pe& /nl˾WbMnsw=_qPz e&S_1zy[ h{)64q{oYM/)9ΏH4Y<T*QPWCet;/VѯGS8]mm/XinE=WWBKk|Nj QτpzLdt'3#,Jrkyv_=9rjm\WeוqƇ$WjJ,|C3z&Qtmh݇ʱ/jW|Rp8ϟ|Ln3ma 28ϡ/4 {T銋Qg8VUc=nM1Ě"G`*:\p^$i{X yH&"9̰4N+!ʓH&?s}> Yjt[U=24Pː"cF@/\ y$Dn?/GO55Hc( B1bpI3g&EQƠDDK:EjYC .('c"Z! V(l0H8ߴy%7$C6K׵̏0øpf烡$|VJYe%)0 `Izw %e(&ƅʘkտww5K~-Xl"vU+Va'S$V; (EhΤSwd"\Q1#xy:-~*a0P膴\SaBVJq5t4¨Y8]ΏcL6 pMLiH^}Drr~ FIIƑ۩4ӣqM0Y=4\qSHWP'OTΪiy ,~*o|3Nat)̥s0yշTvm|V.Ѳsa!Вм%7t ilF2`y h\(ԋi&׋.Vm.-'4[%h{ljYLeEZ I s>ߏ.U=/~,#;v,UЖJ_vs{w'߾.}^mF|_'Yo6L"8#O^޴Fm5 -4mߥ]#7{W}Z[:[K4{=vPd|9I[GlY͚Jz3 j~L*J-g|ܥ/M 0 д]څjصZ1H7'IdǾ($p~p%9Ljǹ)0!qDIIO0yy=;E'86: 4PB!|`-qm` a9\uZ)l>ym$|ZCkcZqtgvS伬ATr#C' XQwC5ZO/\r^.l JK\r- ƙARL:ty%F.)tN{bCHB.# 73s?N mD-חB)t3!VpމOyءgBG ~T/$[ ڠ3S]@6Ҝ!)rWDڠg^:И L0di(b,Ӝ+ 9geSF4n1і}B7,ۂ <yju`XW. 3ALAUndnW??'3x0‰EcrMRN  ].zyP#;Z2P%\ #R$v;#*!D!e!x0MhZ D佖豉hj4BZ"Z#_Blz]3An#3M'')X$7P2b.$,iYxǓYY] IvK{; Ub.BOLB۫N^Ā]LyJP|T:JosTS=*v{oBJhݽm{U+PZ!Oooqw;}n̻([zOu[iHn0Cs262O_}_Y2KJIqmJW4uL+  )+#O0O8汋''w9=9}'݂BHBÂ71>]5 S(*J5LLj +7‭CVADPƽuH ÌМ-;魉Cˇ9˳.=-`'_ 4t O: LaMQZ>^U0:^_ּcT}>Q%^ [ ˤ!@91}P2Yz@daYɍ@q^RFP)hԘ[aFSev0m KéLAwofd;OC%`h CRA(gGI4 6#5qW%Oר Pk2Ύx=a+etp!S.ZujYJu7]wrn{,ԗ-xx8b\i[Rr:^{ؓNtS[N~S:=Yڛwݪ%:Nǭ7H;#rrծ8Z˄2a^!XQZSE57V"FMLu[m͖3>LP3=l\ra$FxԤ~~5'?\c~n< $l 7מ|r|\ye]1KO'0{1(Q|'.eOؿo~qҮŎ؋ 8)fdQ,zVN>7Yv/*^_pp!}A 3IpoEoՄSu[po6n\R匙U;oߋ/jq7Rwf騨oN~!-U>,Π퐴+Q1ϧKQ^ RLzfYVcױz9L0gɰpKy$"kY"M:̗*V77v %aߛ x_w,uTy2"8&OE,LۀNZ Ι1!lrˢϽ'+"Q#}2Eg[YARf ZD7!$12שT m0b hpdTw}KI 82Џn'0'E'>ZN5S8c2fK6 ~1Dq +h."*uN8TgbX0{.  3-WG0IF襊XcI4GHh&ִ~~HJDKe -tvWpH`>>  .b.dKx #.t Q͸\D0UI:Y8bEPVZL,\@O!t3歽ӛUibq̰jenýD xեG: 0#Y-l<$RQ}#|gfߋ7^˛eef׽# g~a4_dQt{PY1֔XgY/#GD*eY@8Z tA)"/ӧJh$YAXvWӁ%s A<л4ŸK|fR^Ogュ|tBwc}zucqMbQ1r0/@)I }D8X#E/te xڇ5<q:!I>]26?=a tro :m~@ lcz-2jռ5z=. 
nDrp1KK)Is9j,>m>Кc%Z9Kbj2# $|-UKOԗNÝuDZK)kV )˵#17r.Edf( OR!~>#ti$4&F/ X%q(Q/4g4o)F51d cio%[4P;U}"F̊:gpK=Wi ~{(I+ɱ' y 0Xo.ߧ\@&W훜~OZ%\mE{"=׏⬬|婎lD;W؛2v%Khs6yϖ՞3+h,-Pc53V q68w^t6_e6.M283 `HOd=/ܾy\_MOe:?O>$)qlwy/b 4\W!)uAdAhu4`/Uk\A.H WJGlg4v C/ ̌=;_7BCa٤cOֶf^ֶ'}@Ƶ6DIjc.[JPM%cr m^CR!YĈ˪͔`8Bʵjp$p68*x08|ˈ43#҉OVHCIH9v Lឣ:q :+#F%c0bQAUa䚋r - ,. q̝9qU)eն!u&%򢝙OLBK?5tʆi)R/FrS9swZ{C٤c_>t3ۏn|:|: rFzC) e?HBM1d·)b:V`UJu).7<[goڿIWfuk| ܣO#e{\k?Cqn{> %v X! xb‚. =[~̾gϾ$Ua0ܐ4aniE1 ~kÌ]OnZږ2ŵ1Ůa[u7/ßo" X0s՚|~a.f]ZzA{ !YqͮVwbj{!8LGDW,\玅Z&DWO8WGDWbX9p?h9:] ҕ?"RGCW.@¡@)DWO`9`pգ6ʡPzNt+{ǥ_"ս/LWe~h~JEWteOt Y-*WC$߼_cnKUwjoq)2|Xy[蛶ۡlN%: W}>3,a 3 ..u*rSq //_hk? ˻}4y_xF*so=Xua?9<1oźb!CKk"̕{zpG0BuG0~ִm- uN[ Opkg28p4t5~, ʒ#'zt嬑`XxpxW?xj DWO"V#+o@BW<] 7dDWO9+VxKJ9H|/ߺW{6-Zvv~>F &KMyoK(Sݟod~P%]7 ~{1/U} vΗW7;Kp1jdʬ2={ׯש]VgS|ǟϝBOW7ߧ'mUEիm/֣YCM"#QVFޡļo ->_xjvc?79rT {:9dYu5y6SȑGF*BTW݆6ż5v^gW0__KY0P˳W)؞i,&䝭n4ˣwL>ZT}TZDSfLYSH8gR5.뭗s:n]L X;ҩ&o|| 9Һ6R"a×$jcN hy4oaN2z(6HUY)VJ.ׂňb5J'+St)$0S.FrWdwKތ2yI&}#N͘#t ;mjPf,c6.f%D̍^}:.i%dT @E ̑g}}Չ!SC`_c MB:MrX*0eȷ@KBCKw^bFb/o>CP 24G=Ie]>O?d-WH@1C3X2̱sHaI-xwޜ@C is5ԺxO9`JJ6E5=\5TT 6}o)|X"M1HI-4Aaa-~\[ w ҋISh#"Ko%9 X &EDmMu,/ go6룐 MI#6.WS6UR5݄f$E4POIakJ%K$X0̒naB,F #)Lv;a5I`/3£*)G7rN,3ؽZxQ|  |4Jer.6qY@rܪY!:S 0Q DZ&6tCS`+uV6]G/0JllFuPm $hQ/z4:QzLʜ`; K J J.@6*  „W' ŝLqN +*!\J]i[2*\l1KN@ףw-VxBp݁?2Um q+gqg0a^A iNePd‹Y@3=0bM ;Z͈[sƒ : :  * c468p.A En,aIA0yVa4qQ2)#\VZC{sՂ0ә1(a@Pse yDgY6Rlp;:rF@*Ut2LIΙi֐)) E>RCzsм4G3*1\H/gNA4Pt=!sMp `e 2$*] ̰Vl|(ZښhBę[Ġ\ ,fȥ&cEMLԉ8!BO`O6@vl7xaY\]w(8ncW]ZzY]`.JmF4 o x :xiU"y&]ɤ.rt%59Z#pːT<`lLg:C.B`W|# Lشʰ@>tl5̰tdZ{:_!~hiE{m&8H]÷cSփxCYgf5{oΚ*1:.zOA `}Q)[ZcMq+f-x6Q#YSQ?f} DUfE d9%$#a K? VxƟ ?nl:nP< ں!5"Zqm@܃0 Lc#0!6D?Q$ 1Qd ./룫FYuE^mdH!?,#mh(S`Ŏf^X sWDk.gB.dsD{jA14F,0ZnkT9^p4iEđ98k5vil#`:G% 39(kh?ƒeC #RFdo6uG Ü>F<v Se:|*iDk6֜߈@UHڳb,F%5vs yHm@*gU; x/zdVփ`ZznAH@Hd_kݤ*k_ \nS@`a sy[O:G]y:ڷ[~M;M'e˹e&A!P]\骅 $bd*ltn1 v[Pv`)jLu<.֐D5s5$cʪ"FCo G2hhG 3| gZ [;G X1L"\x>W';)E^gdG=zMfҥ&,2 8c$ 󫎗< Z6GِCL\ZKo`ebJDm_7!#SR,yISfIHuiY~վ]w*;cg8B5?®zeЯ&u[j)rmAH[NE`Smn~?}+ȩ9宴Pޘ$a̻z&YZzVV]JGJ[d|_S|z, iZPmio nخ}~ 7[-7/ACp}jVӄ[#+S۲JNVs!ҍr-nζoVӷf6m ZX,s.>M[i0^|ټ{Wfz-p^3?iBv5>g[oo~Swa<3.u,~jiLvUwuBP?=\ԆY#cܪ*'olCig)ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"[!6Or+I?G7RFn{BS[$zr+HV$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[KʭRInȭWȭV[U$zr+hZ܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[}A\>ɭȭZ?rV'/JNrg)B),ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"ɭHnEr+[܊V$"V3VظިÜ2rȔ,40yǡg y_fR;x-T[r׾˂/fLzB LC&^~3M" Θ@3fԐ١jX2>6H X'5ii~hBM;1MK=[#`ToQ誡骡 -] /]k?U+z]5R:]5J]=Cވzcv˾JNJ!])=ЕjKE_誡'_j(%ծ#]Ƭs=+v\BW QNW DWχ,k{DW pn:]5]=Cj>ծ`zCWK/tړ1gHW9߫Rgۧ ]57֜|t]yJq~=9] ~p'CP+uRDWzܕr>[`&9N/mjk~3tJ01`:߾I3/0,f4ruM:@ֿ KƷo&`vtu*MulkSC}WBxTJ*w·)k A(:xçvst`*n;굽U]XFf3}mgyNI;os"2^/,ً(͒~Y`fv9NJt>ڵoAԭǍ/wy4{ݢoMn6_!`ֶѲ`e\s^E g`pe!:p; Uʥ 31IZR85%IbrNaz^gYWr !O9O1S@%sv夁YD I;rSl{ PnDbf ں9%(*^bx x5Eɡ:i|_- k:wS1QU> ǒ4𠪥F$+ʪZï,J%^n^s0;!||lV]jgԾʠ.ūp/θqgS._kGW|f]io}c{$;洝Z|eƘ%nj/p8t=<V%ċ`gCٌr,F- x}k{Wejl^c|bq5IraJbz-gbTW9Nכ B^¸Kzy~ru* !r] f<hWi^Ktw\f, ޻Nw٭;XBuN?']CN)N)U|\QQm4E$xØ^J REȥE]R"=jĖJ1"Rvq_*.cN!Fϕ*\T˚!/(̑ %Gs`V>s1߅9w>/iV[~JHi)\:睛+A|79w/CW-G tbkY.e%apesʄQ$yd)oIE1&rPէ_x**IZ::GkKGvԣ_jnsM2y.6}_Nf0O]1Nf˫qY?7w$ЄttJaa |`kieƴ?'?߾)+C5XWIiT#IŸ߻q&%YcK _%h!);N3C9D5%LMO]ݕ #@Hᤏ:9"#y7L 'zp!ǂRH(eTid,f=EqbXXe싅0BµEٖWn79GŇ_ݕx<}?-_8bKex.L 2 5)(%ETJu!I%"6VC{60ceJ{%A3tBmG$f (5FbFl7 y,]uڌEz=j ;P"7TXP%,fw"& TTրsK ZX&t(H*P!㽨@yԈ5!XdG(<`]H2i03g7֢>G!f`<D,>EDZi="9-PR $P@HΐC D'm `("(-PG)85Iiʃ,eqyQEE5=*i&4 _FbF_C\\TsXgV/.¸z\qHBbM2%JHZ=hHBR$Uθ dCaֱ/x@zqk->,mG ,6 ; 
G?>S#R6IG?#⣃z:ﰋ72]#uM4yV&5ia8ܩo{յVTtUD).*A@4`pƨYq!Y*"N'VLrST50J ^0-d1]6!]7\0d(fxE[X53Asg}5OPwJGF+x rی{`Bp qrN!E80Ԃ6B'ťLڲTC&h`$/euRZHr:9K-003g70L\p=lGNGP+'nWO_,qew[gw<]LUx3r|^/}FA0NeEȑt$D HB4ӑgn{/Gk;}7Eޗ  2Ab\0_*فB^OT/Ɲ* A90y ,5OJ*JrVq*Mk"^KqKiޡivmλq7AȉEB|J9#^f'(٤WTA;#[o?-Y*jO .6ϋ |uoo:캋S+(fylH[x' t2ȾYޏ9dp'3k p.ߣgcw>s~~7/l?g/aq{2^&aS'|Ufɥޘ`ǺV>2gyunQ Y6r4t@<͆T&N eC/kt~3l^$Ni_/k|,.G"uIfr|rg(Ow C)9aTM`85HA-FPl$"XE/>^u*،glʬ`d[٫I~%u$%Ҋe 0cqc9V~DD{ rr_VӔ,_zߗ]GI~ y7 q(]\|?##W:bmsߣ,/lϼ!wuՋZ4Y^'|x7?~7h&AB`۹w'lӫu$Pz<$^||̫LȞOG?B= QݚD*ιѪؐ.SK>[:s횢Iܭ+zeLRۯ4f~ujZ_^)1 YT0JQ<t@E&K1XA.@5+ǽEunc Br(M\Zh UAPp>.gY(!K2Q$B(N(AsDpgXX$JK {g9Qf1;4ǽaZ^HNk1T[Q 0rlkl=+B %xU9̦;y 57DtP%7\_c[[~`9 f`FMӛs VW.8WMJS%8N˖b8NF\(L:zy5!cXn96?4biF"+t6QhK,CtLS/DŘ=)ns7:Lz$ɛULȴ<f*H`4="Zt5=ѿylSI"#۝>?rB AFDI Шd" Vo`>EBBH }~wNRJ$.-!)`Iw1H &$"D zG*H w)DbߏX! \wm#K+D]P鮾 AKщ-;g}lQ$vL6aU_*E) RMcc]h5<@zq|r1)B#?<ɯg| o$]=N\_y$I/brItzBc95,c.A2Zr\?&O0>ǟbРq)#nAص%-(Ny~v3#űVlYUry4ݘӹ2hY\m6ȸP/Z)|9WOWSrd#T/?2AxJd_-?-We+l4gգbkScєE]_!^nb}rL4aG~}\|i="F+-VbγoIu6>8> 6B7V|zqUg"F_O.Ksp.j&_l<NGl}xow?~wx?{#uO` VY 0;xp|v[Vnrke|.qzH/~?W=A-oMdt'eR.cIIă$*%m-e6v@g[D+? C$t~$C(4WdKϭ妴_IW(Ur֕AY(ɝ3Q`MC8,ֽ}t:`ƃ @?2(J$pv-Gq/ ݏ,E\gͼNRv&f|xn Mx1ҝ,树ho*uWJΪ.ҞLyRYJT}Z*i7C #q77Hed [^%•_Hi$w,3yzȠl orD>6^1GGk築ogWb|Ĕ=+w]g_::uk:*'ӗuиY矾.3 ]婆\f}^,]&}4xuG2󎥥{UF/5hntJY/,L 2W@aEy&:v5܈H̿',iJ9XmzcO^ 6&LL>S~9'v9;0/ DRBʲԁilY!1+D !5ra,sEi1ՙJCiB;iuSLeKJm,,}VXfs˝e. k6sXIUJUyDTM]G/LΆlB!OfW7"5g&4O:<1jSb5(FכLg<%Ϯ"IF5̥o/}q9Om#]XҦ'_:BO_NG(/*7)]Ng?}y|3]̾qӬ/gxfh_ʷ'?qęmpƙOhzˣO/-&mbRm*oՇ‹rg"^X)j/OtE, s韟|%H=CEHdOeT||&R .2J󴒴 YL3x3*8z9[==9[iIBL ]f\;Ȥ/3[z]qeJ ܡ~iUV3]Y+]+U͹y qޡ*32 fu&30_jQuw&gBs.Aw9諽o޲᧍Ւ|ūH_< Jfj'f{Lk(-h^Wa~h5 hW%ՀBb~bfNyWm6}w]W~u`. ݟ~CZOnr!o?Z׋Ym56 Zf]_Lؗo;rV-'s #+h/#scũw  pV&_z ݖ$uUW>d~őήOǓ <\և7tAן`U;g3tUoⷸϨ=f2ZQWmrjדrj֑d(kr1[ĕ YtPة00ټo9@wYF.Ǣ%No&!~_^~eKY,[ӫ"ҸB<ө*}YU ٩mƜֱ:ںnˀ:PL/LuJWݥVRVjIOOthŮKWhmp사[tU-K2z.T̥]Zoweb-6 貒7 +ǂnhlphyJOY_icfrzn써E,Vɀdԉ'L$OH.fSK3lߨC0'+|=riY 3rPFH-7}Y G0كU1`aW"B".j4oC W$WPpEjJq5D\I[ HHRY+R#+e* \`3Hr2W2jҖi. 2gb{GҖ}WKq5@\+4V7{t; tqqe+,5 W(WA0[$H}l9cұpgGɕ$} Mq̆UoF cy\H ^wd#i-ھ{E 0fwTUfdDwj.t%h=CWġ' Zdz,hF!Ǧh˛#_w8vC^xwC~4@t=gDW8Еa.t̾ӕ|WHW:r] fCW:Z\JWW҄]B2 آa*v6fPzt%(ҕHfDWi>t%p ] Z}+Ai^!]9EN jnp#֫ͅt%(E+~NO| +A˴t%( F ʛgDWf>Us+AV3*œ {3 F] ZfPPT@W"Yu..W` _V\]P`y5eۣo~8uW tU+7OYyىe~{Z'S.!'bMqzbHOi\]Ջ]/_ 氓*o+ 9zsOmWVNU/WBH616KCeGCL*2с߆:Z􄤐վWח jk z/m{8ft/w11o ~8om^G[~ ~}lTʹmlf1y|'>VD^ /!\_ m|!@{6G^A={={gwڱ1a6{'aA6ݘͶ[;,ߛ=>D|ٍo}ؿMv S}A!Lqӭas9ֽ=cN ;~&;[ 쬝ͭfskAe[ ւ&c]/JTt5s+AkyJP:wWHW<#`<BWք}+Au5=e7n>fPz7/U(t֔qFt% ] hBW@I;] JMztsqN W+\JƽA]B:*R3+1 ZONW2&F bӭvٜ; @lkh,ќJhޕu hw@W2Ozs7忁`6/}j7‹]rvi2;Е9ճ>pKD4d '&= Yov+ՌhZ[3΅m}iAy(~=4-OU] `+f*NW2]B2^ѕ6<s+Akӕܷ/+kNѕu^͇ j (の^]}[of]n}vDϧggڧ7ەWkWi]m\=۷oA!  
bv%66pD὏"G'c2 +?٪ug|P\~oߡmly^]"Vs&:n>Zv{f\ R}*뛯_<Y|-/ջ;ɱ%͡6& dL.GvFF8*vB0 ˕7NfćFgh7i__.f?tOT]B٫5`g-%e Z1Uup=fuN)4j̞Ѕ1*UeV[rT{U]*Ͻ ƎXg ܍+o]Us)V,&3Yn͉!z-@JvS:*FZ0i4FJM6;E &Fûgq}E5K[[jn\u ɚ %ehqjJ-pO=!PE2c֎͐7&{PtPj!SRW䵟ᑈ&3˚ ;9#M&fJmw ۽2 IWDISCQ4ޅK@#I{'mv( Vu Ɇ9BuQE~&i?TڧMy.֠8 ,s.̱f >O͹t@Uj5ԺsDAu@r=RNZQ :rjaJ>,lGZ$B$Jp -ڒBBkoF8K}SU.JHVհdDH,8dclsi=iQU|"GfҺXL)&>*C֪+הOQਧn|CJ}Q\` :`Ҟ%E]hTGhOMZ6TnG0Lc*yˌwh|f l")>(uoY{$(X,xM`& AC\oS/%JB`@Y\Op jũ2A@'fU5TtgJAQ _;`F(xgYywDq U86H;8!"M`߇Y;OvۙxaYήe\2 ~􂷪W e]`AZ$ >:%@uPQwJ#ܕLD$W{HVX$ETXPjk=O :#.c؃VsHVWd70R8Xiy@4':AJ[SQx $nGep6᥇I,TGת[D"j=xa&k) _ #Hb"a~<~E勼 d}ȅBOE,[е 2"hѨwPSv d<Ť>&J,c {6vH HiL)},CkJ q[O !)I)юa:DEWHPl5C펁em!I5M(#{t-1@HDn@GJ?F#t&% TQr4$C VAõp:L*BUR@gUW(!0c(.j^ n%C:iϢ;h<\#M8DxS6R;YyVUM ioR 2$`>W3T Z)mm`=8;Ϸ Hg1rglW  Bztqtj 4zR6` ΝTeVۆnME˳"^y"<4z&x31@99KhIeFIJA!)a!/Q[tؚbCiHF#Eɟ&oQ^1 b^8pBkE#GwH ;^AN lU?\2'ҤSL4'a)Prm⊑d 74 8XoCnҘM5"svi X%TǬŹ`F?-P H"%RDUYqU5z]c]Sa\eh١TL3AxSMi{8 p@ G)0|HY?KHSq!r@!r@!r@!r@!r@!r@!r@!r@!r@!r@!r@!r@Em8کp@ .׭t.R6Eq@,r@!r@!r@!r@!r@!r@!r@!r@!r@!r@!r@!r@ϕ*:Yip@0>zRI8r@ϐ&Hr@!r@!r@!r@!r@!r@!r@!r@!r@!r@!r@!r@ϔDYMZM k R`鏝)E9r@UF9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9 䀐B9\X/>FjAYjZj]/Oͥ7/@CJq<[u `D@Jۄ-Eؒ( `K -5G-"k]/%R\\n\ՋO[{JI!j*sUĵ- "%C(9+-U\l*EZ}UH4W\i-2W 05檈+d[UV}H gh"$1WE\Eb])-Es͘+e׋!vXj'i)1WIq+hZNIW;c7~aY2E?x 5[|Rq p ~b? .d *:0]J_<>v Ǯ{?' :"<}TeAQ>Qe@^ZT,j•^@}PW7*٪b ѻR̀3_XACY ʥFGEͤ"NU\=\ j-Cd| -貗uF},W$^U"W"[ymYY1+<ˏk6 e'ۨ% C&V NbeU09lټalpԑ1k HVƺP%ΕJnfSXvgc9\O@R_Vïlz9oEϚ:xdgwp25Bw翑%θ=D(,~}zS-J-Z֤%PUS EJZ` ZdJ\ql*sHi-ghB lq m1W Gl%gh\+XI\MZ5轫"-As$Jji*,)Zsj\)Fs ͕26 hQ0X񮊴SEJ\=CsuE,ۓj/*sU5GosW\Sh*+sUմ-Hk>^]=GseִM2e5UVc7WEFU͕ܲe dƒ0L:gk?EX_. w* Q+N<ĂO 1WGUv>쌒jqQnGjVh r]X̓Yzj;K7 2|QOEn\n dTo=۷~IU/cti}w3,a2f=r?[n*@qڛ3|mnc&I{FmWW>j3s+͍7K0|N.:AmEp.Įi]/o+AuڵԜٕRSfwpwKa;//,§ՀoWq](oswpzYs6zk;Ej8a8 v>]AgLL)t:W \[Aof;~7> /yC_9aծYL/jfx6/fn_OӬj/6.,q؛/ئ:!q =ON aQͳ8賂ӫWwU1f2.%LG[r07! WjaEw}Zoz!P1?#&GLiY#$SnwnҊ$MJydYGC/1oח=+so=Ć)Kͬ Qs*U/Y16|Hlw߂6ozxnS/5]pԭ *qʒlS5˭ 2kELXZsM$^Q-2W!FS&ƒGeP$ZHΔ,<"PO1(;8B5Dk㞆{-!7žע*}q5F-sׇ4p*u? ?@hig߼])W(;NQ9D)'i+esg.{%ƿta1D-o N=]ҨCTt"XU1cx%wU$çWxa1yHTc*1O՘'jB, AXppb=%4q}Z38SN'm<٫cĥs.A$؂,BFcBfk_ɄpMV<$s?icnx5ͼWqʕj }w Q9A Oat>}.`+-/I cEe̕U0Z{n7Ч|˶>So}wֹzeKt2P3sPgK)9m$4G۫<*% i!HScrGm`*5U ቚH 2eb!M=]ycrSh0s^Bܬv] n#{6 <b妓=SzR?]Ho #JPM!Hx+az)1@eF8s{",ZG]™depAl3995c2(`P@ܞGĹ{q<'y^WZ;fg ,k*:W74kKJWo)qrkCMn)#EܐaN[7ETC%RxҚ['R`"B8K6`L\\S*ɭ5<~x $u;Ժy[PyyR\ bՕǂ̛`b]zrV [g5#yfwNW>D*Ma!gqzLcNSGM y< {ɲW_^9~|ID> aF)%_2ewH#&ؽf&.%\hh8AxeR11i5eGmiD 2ǿG\ڕ{pT*Rg>n/W;.2Oox|tC?-W)L:Gn'!(Y Z$I"Ll3M!R=+O|.s0C ]/Лk"mPyi֒W}itֆ 'B(0"eh^u:k}iFh`5ۨ4Z8g7-2 JJ6 "DHiƔ@?pQ۟jjf 2$RB5'pΦD'5wJ*OQ:cYdT&Bjfp y mJ7և" q$B`N@QI2[C4XM(^rJNI.>-7{3K?gN?n/R LdJ&}tj?4J`I:‡rS Køߋŕ:aZ'盟_3ۀ% F}'@0P2ℝ\ZqNy8qqޗ`$l5Q$9Bn*/5}&Wik/ n}ݻEi0mا.[CS }3TҗΥ ɼe_5^mN'W(g*ktNVwA篫UF)0ѻ+Uko;W43:MOsF%g! /fmm P/I>pe{AȆl'% bb,m,,a 46 ZbɇޗECWev-Zo,K]TrSu O1R >?GܛGUu@ ˹@2rC f֣lY͚wU v' 6 r0D-ՏץBz!cmB s6E}UBmxi9"c#]&vأIΕஈ1Qb)#-?8ZӒ0NzUzI'X u2#:1Ky ϕ!,$Vz؜BRB:{y3Nw]s {Ӊ+eNc܍u@3 n?9/wB}s붗/7V鲤g[KwtGY&o}rJ_6TR}L_US AdR4 Wgގۑ E>6& d62Joӄz%)=cfܣm`qnMmmoٴng'޷@̦ɨw3%J:ݶDsrD@ޑB8RV?|},g*D| ZU*^DDBv@ѸʁܼCɲdT!STLQ-p h"kmH?;Rq&%^cK6#IJ73CŇDٓ-q=5=U_WuW-7 1xi ^.xk#^C9y9( T*)#JIB+UN6Hk EaQĀjb NPo "RHF'S69k5)* g\.bə\7n8L9'[[S:oquvkcgٻivK,{דӭBnm>gI6mν}!1l]csm{|KWmͣt-klYwں{o Vvyf}Ȕ[^ݜ{̻#k J/%>R"#¨yvZĖ2DG$rRU8&T*Qʤ }?A:#C 4b{ $K !Ubtb+bJTJ>GIpV&-{$LG#ux ѠDqqAX!VH Az1qvHhlqt迏+sjȱbֺI_y{J7ֻETM\o/_}&Jt?ΓP@VEl!0$.$KC<Ⱦ YIǹ{,_gZ`\3"5u[C] gR ;ʩBи>A1 K30hҁLxqI .1h( QޢAƐc Br(M `$Q.i1qk =a %DrZ'RyY\bLDi!do,0 |Nyfhڧvi,HOՖ)fl . 
|ئg,trsΔRC#B`ciy_{9„}7!/LKfrR*^ԋtT-wN,Zpj=2&wʝ1_ntҹ[L/6g٭9 F4zEfTڻTRqDL2J1njROGvk-Q)1D40biBRݩ$Oc(iFҞVQ4[uy "D V 51:EI1P`Z$b\6 L%SŬ`Q«hq{欯(yqOφFo=_nNVw~i (BzἔC%G<9ǣNMPJTLMh‰$4YTjBI1F3͙DCLIF0xsTo+4xwԝQg'rp"#t)ZIB}!0((ElB*َ|bXXL30B{D狋I.i/<8]ػmPhhoORYtS<&ښZ[cE%E-dž6Q24Rقa pg6^ D0#`C5M1P`)q#Jy(]L;ګ%P{v;0Z!DnKY# 8DLP [B\JDzlBށ#dE{5bM+xcUnu!@|v8 ɞKCFD#b%[sL%D Iq;[-B)Xe( Q ޜY+- W[?ҭC8{'^!l&B1j)f;9א,q `Zi+&UALJ_@i}n3qX~?fz,-(<2ʈ$ X XK Jc0tt>r*H^G Bۇ8{`送Bp qrNq\+aT+mNKCe]inIëu~s:[bs7j॑O=1D.m`@b[5'gxRiOd=F)lgFoGhT$PX.GNGb1Yh &jfF<&bV~֪7zӵXc% m^NQn CBR 5 P',.pwZ$x䁰 ֙c\D-U0t,gN҄ȼ?RD0~QRf=S]<) oG].^52~{;H'qH% . Vu9EJ~RL9Pec.*B&~myuwTƯnzpENuVYq&Gd\e[O-~|{|2MN:t1ZT1 RzTp G~4E#6o8MI.(;c:ҁ~p1=\ɼh_0;m>?dl8'_g2ߣnwlu+dv|XS3pˎOO߼=q8X}x jpt:G<NT#&d2ė5|Mf^8'Xo/|giI/>t6YLdc\ ~Y)~JJMLT9m)Y3e?W1[ ; . ]5yx}vհez׮8ʓI* * SG)ryLxfNycpm0JR]o9W6E 2s3 Y쇽AGĖy[l=,?ԶR$U5* Yw&&q)0} ^v{b2VI4 yYLā8> &nT{Mu(NF2+ +مHɣ@ L۳-ƤMp2In)kpEJi{П]aWbv2\rb+X8}uV}|̻8Xqss@b|IV=1_~x9]sz/؈(6O& U_) ɥXRM:za}H:F@&! 2IyMdPn []xY7x1h}բh~#Z'$!i^ϗmr6,bSy5da&+A~ěX:7՞-L//3bQOl7`ן $b4hJäW*aJvBA&Z] +4ܩ @tӁY;QOW䟆ZsڙC8N HZ r\x.uCԦiɑ;GCpMd܀ҾTdDUSg\~u~uo~u#:~e(H=OPY5p| RhRAYLv+[~KiTמY2,>U#⥞Fnedrg5|ݻɢ="7]/(. -`PBL$VT p`U'kqZiF6{V֍#c:D6rF2m9XBz s N) u=5c)N ~I+!E& B[WiS0(BB2L-FI#(wl5bF-Xr*P@ʐ nh>ŵ\;d޶G'm&i<)ީij>wepZC2%WO絖ůxb(aUSK&f(W3^IujWuN3w=^=ݦʯ?\Nxƚ<|tҷ O$/i;F@%-YoΗ56fii3 tkFq^+ˬ7mχ+^9U%V:{e.yW+4X9ou\LG<<}Mna jűŊZǃ4}Ľ\T(ܚv; YH!f+m4Un:y&#&mQQH-jY` %Q2)QįW:[e (Gkv|q7/= g7B2R9*>*.CN:9*JDF%1G֐y銻_4cKC|j9.j 1l|@li"&Z0jyj=~07[>a<ۆfWnf!Bq(m)p7{w S` F2N+VTD: EpB0Z+uT ͪM2iB3zI Sgf6{cR"Ǝi]glP0AڧXQRfh7>rU]K7-p0'x괱Wy,mRvoUvq39?StGv@6Y%>.N_h[Xu&HM O=-A˓//+QF$gmqVY1y1!3Ĝe9@Bnѡ6!i(bL/ mbLR"(="btG#t]~3q֣C- V qS7$zq/<"\ ࣇ1-ʅ]ȦOv /79 G'ǂrהKG) F6D9%IUDKu&Z^v&ΆRۼOrۃ£/mhI朼x6jK޺TTm̮ zlYAt]*/.gN>إk+rh 2T ^;'Vfw7;Y⥽yھ]|䛛x>{Ŋͮ=Q,=@Cb\}W6ϧgn@ަ宻? :TznLݻZLN\}xRh@2l 0`OIॐ?IWbI7U=IJIzd#$@ۢ֫fE)ؚ 5iSP bY%(%bB0J1!I"y(*mr1Cch. Bhbsej+We "]V=L06(@LW#ٓ6cٓ='Uʅٓ(~*h ΔnóA K>:X]heDPQXdWP=/R}K%I J$sdKt&a|(e@m-V7S^7L6bHsj[o$s gU3J*i;ߘ[Rr7gaDyAA[t%ifCG>RJ81zb= LiG:%. "Mm39KA)Sl4&PLfcfKiP=eJ jMC(w :gCv󉫋_(0F aq<=<;M]n/Vi/yW2vHR&W"IZ /[l +tPeIQYf%T^2pLX&(֑(]('JRי :gQ4}\cSR=^i~#Mݢbv1>hCԚPۜeX7-=w}&Z7cSl៞BsJyd# %Hk}~gmسSխؓQջuPp<4>|b/|XwH˷Rgϟm1zO@_{ͮ;7]Յr /|avVw:V'] <{nYl8{d 6HW(PVUT&\7!::Zt90w1$>|=>ڀh1U0Daxj2a8qp̤4 Ld!As}uyEiFH5En3$D yf W;e4Au; ^ ՀXU܊=q3cwGSܜ,=n>᧵XD5Ch}bCڄdD`QIx=С'5%cb(H=rwlx.Osoz|sG.k+cN AX Eiш&d I-OW`xyrǿכ{-kIPH JAլ$&ng#(zN8Mj㳻!mF}) vb]/Aߞvt rO#P M"QP輢2Ţs#ZX!">%"2"4PHTs#6X%̤R̵I)@rs (c (rUzLω3.UvAo7H}xr8o.܉Ԃ#\f&Sf e1NSRQJfYTT:7͛/vu$WhQ *rB⌼o6P![rtF~c"ջ6.EWa>&_A6`I<O>ޛ! 
|&?i=dLf//c?a6X7vDn^~?4g{7g]Yh-o)A98p:Bp« 8<Π{M'c78S=h_ILgMFʿׅ9 K&a-}btB͈KNBԍPEӯ5ήz32 t푎5Q(S; !8Xwp4~5k^Oާ=޽A=lsqZMҠx/xqK7_:[91, l2SyLbz{CWM^#i2)S6lKĹ{9˭)C,3hbb'}ܯ8 Bl'4 r>U Mid@DVa=c.pFRU@CJb #52(B;hNxMgNUէxKs1Y ΀TS/zrZa AHΜ`O9O[!H :-Tf%^ii|HBL !Ǖp)m&\|†5cG@ 4&q$Pُ-ivyZZ,fy @lU[{"հX}\ЦjgUG5ŗ=VN8ܸ0͐"c BB,Z[JL-pZ1KIܗ 9H|υ|;;@b2fQT@: ii% jV L}E\ se)8KDINgk-*hw8'q*%EZ4!Bq4(]R4 ʼI<+ˌT)R*y 6Y-ړ~|`p;622@1ٍJ/d8]3%j6 Q0EM53XR?p2]D5꾢uQ9I"T!*^"ʝO% 5pd [Wr6&έRȰ, Ƹ#aREbD}@rg2g;1Xh-z]OfsG`6<aLyz>GIĸhҬւFFᔔ< rt1VGTxXEA$RH'3%)'*qJ6Z_ * 1boո0~XiS.0]@{K?$*p)BSϘ8P=|:DMWL#,8ZI}/b} Wn)TnZU ܿ~o/Bu dM(૏a\-4h~1fIU64r}ݯG>wgK6p@ߐQhkرe p^~p#)!BX$5#BّVz/}=5&QJ#68R$#Y@ ȟc.*5X@U͎u 7RUl z+SȦ,"RdSlJM))E6Ȧٔ"RdSJ)"RdSlJM))E6Ȧٔ"RPȦٔ"RdS "ٔ"RdSlJ"RdSlJM))E6ȦYnr!G[3j2fƄ!j p$D%GVSGWIOaҁhtEr9qj,Hbe0*UW4YBRp:Z)4YE8եr!*61M,c4 [^az9ljH<y_O"28{|$,ri}F!/(c@B(BYh4ʢ;U r׍ 0`x`@^BipPVEA[*Ӣa(r 7y[o lFNO1M/vWMz*@%#n?M'c쭻 ܏أ@41DM' ฐJ99D 宍g2[\=SQ h[S+"DQZ!DODQA,ȓqϿ*T x轞.p`1m„<(HB4G/UK9B@7qyPޤ=>+Afwܑn7LFfK9@<9ٻFndWދ~ȇIY df`hk-K=2 [QTwE+'tl$\57l0-ܼp1h]2#|yysB0Q7V>B)\p볉c!Рo"p^i#mА:YxJ2N7uO&0ʿ.&u.N~Dqn ;r}qv??Iu̎p=OϏ/xi l>1g_hTxQ9x:R &'T#&qV8EʞH|Y ?G8r ̫]&CbK&|\+6<(duW A)9bT#O b%u_ _hkTb^:7+ Bx ߡ5!8:JPb17:bƖT LiqehJ"ÓŴXkz9[-wql?=>_-*¸W:-~[i/jqK5kr6(p%l%P`ӵjD˻+"nzu]f9l*WR~gbh .=>Tqzv#XDy[sG620;RZbwi>$y)X.Fp+1d1Jt,6̛ev3i)c3L-Ԕxs~j0,;)gv>QҐn*ӭcBtW&M[.p^x߼)r+8-)5hx*A$B;D s kr\qbmލ{vX:rNGVlaSw;>4WR*4mDSМ>2M6(K9Q(!)v3y9br( ϝ[\xs!+>12s=žh$Usʟx-SO. dJ.(_I/u'ww|wr/Ѯr/i & ,Dxr$DzMByDBTBETIА_,d#<yi˵)I"i%Y4Adcڙ8ȥPop>bW=[gI[%Ƌi [UEn_wD tJ+^2P[)3*,c0-Eht+De64uk\h|k{c8Z,AE@95BC)ZLJ AzM9MR,ac%Dhvpt$rp$Q19@lbk#/) . #6@ VOYMQaG$DF'e^:,)֫j`9|ױC1Dd62CWAIri "?lquBS=M!d"?E72B[mh+{p={wā;alS %se]xWBDJι/D;yȥ0|KL{ ң 6շ!קM/@vMIhPh :-Qy "$y[1c<-ҕӗkN0^ c5{(PxwbWQQ6vr5 w?]^}nތ.Gw,S[79ԧu`ֶ9:)y)p^ Э|[ ڰVP}֞V3uoZ1w Iq. Qx9sX{t6=i-K0@$攲6pi3V5x͉T5V(\rI}T]'Qu8'Nꐸ0w(ZMGÔ _=wی]~|{Tʷ9eA&o6 M8aH[Y(drjxǩY U: gɣ$f85c2(PR 3]soY_Oyj;W|dm7bST>;'Gn!^F7[qwKQ錵Rwa`&F:ZW&>5NXn9'q@țNj QG!"tJh%TIn<48 @ѳ%=hU$h `'X|wq"]<-W΄M{˜!j#o\h2(q>X1yĭTRb5ҚSanGIl}}v? `@+ RFjA jfdFNznr++xNO\,YN!,ĜZe8k@&̛|9:TV KW>:JX}␧|Y" Н͙=SӧJi|PJy>O]k'Ui1DU镐HМ>:/-Rd}$d62J ]-qP$h!3egt{Bng{~tJˠ\˵α3Lo91 ?818ݻ8Kt!ˆQ͎VÌ.y\ėy^ U*A zC1eO0o_tIJ%@Z+P"-1Fm1K3tycBaw Y ɞzg @úG1jwr8 3_ĵl|VIIaG/tW!ݳ\-xQ᠕Ka*ErL2 p8|~]*L]n\7? 6"H̉:QPX(% 1\YrD: <Ϩl$S{_8%3xC-@^R!Db:Q"u;˝^iT~g<0AcaLgC&Cgj*y{:B('NWO8#6 p%fValz<媟1W]f^|u19bd>l,m4 /J(+Vqz5㆜Obs;dw]oGW]կj.Y,nHF?meI%C>DdS"q[ gjzd6\MϣjTl+6NM&ЧJOx٣gvose+z3Jr,֒Z'3,߸K,^B^G]CӾ; V֔*ƕLr:wՌش޻C&rܱVZ$9=,PtMo{w5<|ɇ4f=z^D/bKkt~lg!76bKk_W6_^^X"x>gw7_rsͣ_n_v~r廓_|<ÕWܬ-n/{]tm;}u˅5p?zs+O!b_kƚ>󈖻ss_Qz}畵a6/;B|L>v<Pu-d|)芪q‚DG𘀮m4be, 8eAwjA2{ؑdY`JFqUNR6MٛRcH7 mS\EҞdEXR'BR^ 7NxѬ8;PpBc~H+Q^d5U_qVz-I_t}L4 Mui}sr 1_}"8's&ur݁F%5XwF:)'r*2"Pu3ԕM3-Y9Z4o.yqD5+:6(7-Qd5;%EժDuvv^= xkW[7[L]\Wޕi^ݪS@RITNF4}h/S}#&YEGgټ+eރ`i{r?a(RgOsg%>󹕴/4wU2e]2t;9J9_RDjIM_R 0l-WH{2ҩ(\tpi QaT_(΍=D{0p׮>,wEÔ{2_x'wRSRfP=tTH YTS#c4.В IH /O)~ɿ|; a~ U%@e NAΡ_ .ea 8&MM %H4`f1l,DfH',S0޻\c+$"sU$1me~VT.(LI0 '0Qeqofqx'pnwb]__'8U_ܤ7whge)I+K2شDd+-ƒZ@cMTP)F ViXes R]EQ}X IFZ%%݈k pcֱ)\k+'m ]S򶫑j7'gm: Yf+;-~[XzՁE{}{o;x%,P%TԵj$ %} 9i%IEO?$g'Aw"wp.e6u{ 6?DVџ_tE>O# ^V(kTEqPK+)TN#G[b""UbT)+!":VG7m 2\ܿs+2~uu~f"ݻ^InՋgoncՋ]OWR XmyPfRau,V'&%5FVɄ9e6{⣰ީh,5}vU~zq?eI`_j퍟>[)FM|]2R6k짜W|?iqo>lo>:.^cˌ]LS@3{گ._4})Kmk_]aƈ{%c" dQ@lSBV! 
wd:ioGWۛ̿k(NIߟOu8%\Sr]<%CB2OHrE i^JR)4:I@KT# w%ͽ[Up<Κ~U |BmB +~ O?-Wߜ- Vo\㏟,5Xd>l1!_a0R cJrvҚV(9ී(^tF F^O,XSq3J0a$}A*nPߗDQzOz(+L1 ;o/f*tM$7$]c:Av 9aAREePNIe'p}#C`IW"t]&>"šMV#r#u.g"()Ijsd9#6 ͲLT|L"YЇ1:ۓ'd&X!Ca\`0@V/Zab@#tF%>raz~ -x>~QBO3ţ.J([Nf"hbkOzhDٙRDd ɍ(db?9Bw"t!Bw"t]1/4i3m%79C(kHS%'9gLT(F3G5P?YDL[ S\uH']}л=B䦑w6"# ~SZDҝ#F -|;lDT~*T3tNwҪkZf{B.Z{lS J&Vy1QI o \ǰ.Ǽ¼I.׉g2uggO M̒M&b/uc (VP|\BG1cbLhDl<"1YhGcbpܹ9 GnqȘ" L Z v<XZzD дS,8ƈOq|1qhLasvuE,?&Fۇ& u x oK,`K$IīQKgYiZN%+ֲ3u1lf)^YgzeBa9@M^ۈK5Sf=E@o %MBD՟0p*z>[2{_w/߾"of@3nJ%Bў, -d)B^hnfw0VTu+gf=c,Y3&K0䚪 yAh Ʌ/"@}N;$)j~TE55L4(-.Z-m)Twn@T`:-d\R%k$t^n"@fPJrփHUt߈i"Jk tHSCnBY龆|pJrPAk{ׅ0Ieci̸ʚRBB#Qm Zڇ-@ILEHl9Q xSTcͱǙq0HMF]Oo"c#4owTE4D 8XlܾLY0G |qN2OPqzMg[ ǎG׳1DF2D^}0~{WD%ZBFk!O XizRprAz\KGkqhU7ή > o"v] bPMFCr2tܕ4zu8ѴRFs )=T03f`%hvrKq Pz!!%8eiOpIaTAc^JHI*/kDR{1zcr8u\NGx`]^r:Z=kQe>Y'TƇذX@:,ְvG7#]٦)[}\`,&]CW$GJ1.@h٦lZ_~\9ՠQM^r ތtf.%r:ervp@wŁɳ=;ׅ$LxeMQxOzy1C4fhYT'hg`Z' ly"UAbZi:iuq(+L~ ЦSq{U2v{LDL` \"{Iӌ8"27ePpQeSV2qp{U@3- i{HS裊ӪגV\> vxw>lM>#Z{ɻLvzɾ X-9ص\\9*dNC. ZS/$#1!0 rI9XxF`~|ՖB 8_w,ۈbP7 MFyvbEvb*+1AqxâzB@PbqǑHءItmXФS8 0ܰ\@a&]sO`7-3ܾU_pQ>^`':YTMxo{eXwiE Rr@Lrycr2KtDҷk%qgvFA@.I&Lvl$ 6GHyn%K6ee"H6_5.v?]:z7 t=풚W0xzD 3KY/+v};wVNVi^5$UJZ|7|]nVݴ_HRrr*4+d &UQ^1kW: )q%g]pck"EaTupH.^ 9(PA@b; %cpzBN!|*U!V_wI :[aBgquW;H],}ʼnU%4glPNc^&KQgjKh~S:&<3n}3pu{ZxE`T]f=& *}9ρ mc6b1d]Pl}g4K*й,.Sdݨ" !XER{qT# \zBBC\7uc&)LmaьD Ri*aE82e5K1%$o2u_t-aDf%O 6Lnr&%ب) y8 1rvahŒȓڹWHܸ[qw#]Jǿ^p*(N )jID]oP]L&Y%KzkC9Qz}ЍMw4&leݨ>ZnUQhRv` YsUkƆtb/͍ qWx/2g:0$~z?bl\q sOY*qxapKŏ7q IYbs=A 6:okJF1q%!0<5Dȍ0uqg!EX83RnHGӡr]-,ւ*QfgS|Rd6cHQLE% 䅾Kd*@2#Xa,MY9J)T62( cS̉Q7 fniyF N%bڼK׈ʷ)"2.(2> <i gygDgB]ߕKtHĘ(W O"/(?t}D92p0**&Ǭxqs/Bk0.1uf~<[F-k|Fô/L0Α-97sc`f\?H`}|8RFF0 1)0#2rhF*BOZd1ozpwTk݆-0m1ĀfCA+}̶0uŭ$XA v_r(%ίdķ2F'&6`*:gu#xQp(# a"WS|O |րvIVpm4<CcYyrS)1{ 1%0 oUx=zނAcW~a>]\)1KD$9紲-̮@~gf#Ao I;J_KX4h V2/=^[MIYudT}w<ӰkHb{qӫ `ŝ(.r(V0{v=Wa8Kژ,2gr|/R]Ba~䚫ȹpj5e;|^׃`FɠxfA!%V3k"6`M EjL+/]k?tްJ׉6vG'BlrQ\ĂTJ6H0U+QHjWqh\]+jܯ5kTP89%v=X " lS7r7%¯0*cd1x\f71cUt8.ĕ&0JTܖA|&Jwٯ fWICcF%r#/(.D" (oWtUA@構LǜK25W"UUT٣izwZY%dy W9 2B r3p!f-T(bbvbeFK_b9a!>%.PqL +Ǖ;0ǔ1qbzԺ{3qmFwO l2kF/mn-E/%mbqzj!U,&&W22,WͰ1yǔ2_ 3Xlo.GQs"$=iOp"u&.r}HEH Q4\bEl#qEq.QXƮ) چ=GU"J4&ꫠfN5z3K3GcQcaPEc72Va;Lp4aU}8nCZ2KZawo摦H3T YM; .=G#MKϛlL Tn,Ji]M 4Mo }!B(*|oK&Kq8PUY4KxCwdjmQeu wäƱ6`!.& =4GJm;`?"t#/ C\d p{H+ "PJ<ة% ҪPrS.j4FrkL ֘XA\F4S쭵. eU$Z)oă Hz+IKfpgt` qR=:vØaϑQO?c:O ]g=f@WC!.VG=(QH+-XXO0]ؚ嫼j&vIjz6vG'w ik 9>O[pxpz7AgQ!=,ic7.F&2WyF›؎6h6yZКl 9W %5 >aA3F2i[4< V?n]lo0`kr˻+=G>GV+텯ΤUK由>KǡS|uWo~Do2ظ㏠%'m7> ۜiӿtB`%i8x W/[~?[ZݞB٩~&]FÅ)y|[ŨW/ eA%`\(؁'!S=%N$n`{AfɶËt+dzkkzk}EK2x.XA뻟ޤ:giXe<~ 4n]Nd#+D4xpWӛd>8'0'n]Ywwp;cדurrngpgF^9u䷟߿;7w}'Oq1%_odvCMIj7E7;oOܚvFHS wxSpA;npܴ#)8flуOCϞ 5YE~| AirNqOѻn:AN6*3ZY S$Jx]qMBqQX_;vɖvɮH˗:<۵C Te(Cd$۽О5ES(o,tmP H :ʦ#+n~'o jTIf33`k;qڽtd X 0f 'kF/PHui27r`I,O2fn#vJzM &ASBSD !y;8Xm(Ƥ[e.g^MïŬm~~ l"3Qd>mқo2kry Gc4HV1IAV({O^Q< Fr}l ?"u M#/ϥ0ySO@xWݥۄGUxnCU|;%-}ةt4LG^; ]ϱY,:f}7ihq~駙9F3Ͱ~}Ԗ}چ1*MjɲjuVu9:.}9*5NN1/O^S8X` %cljߟ9pRgۯݹe??/ǟ~Bi'b-csU  81{,.qT^}0X +P*1e%#no/{Wrqf^\j9vdo;Tu\u psR of6ǥgC f¥4;wڰe=}W!=wB:묾KɌrvv.0"}2Gsv ё]0*}G .TBuNzȾ$Gw7d:7%B>a*i䎬9r2lo+L0Ywi+WW D$-UPѡ/$gK)ޞW:~R.^uRU0? W7<)Bn: Nox7? e pxVρ@XIOiI(&6$![&4$! /sH}E^]r$EZHD;Ǐr}E KA{$^S_DOW ɿ -Ր/#&lq@ʁs@%Up)M <qpkk2j  "V!{DНL7d!D%{dcj8r_a6à~Q*{s[TRf˥\j)e-En?jhH_b t|VZh#oA` P0 ؂>S,6c[2EC'2=SLMS9 ַ2xͅmjslߦfK۟*cfMF*NC*m5M9mzbOiO.m>N?1'MIcs(A %p zW78AC|DO4y(r]]1(hBmwPۿ rO8Xv6"F-k]G|k92Ӎ \J 蝟œQf3yO¨el-BO.y7^ra3pǗB/32++b8=}@t 8V/! 
5BAmq yh-Q9 a>u4G/:h^13M:NCyu?,%~}'ўS&ztsU^IKkr wzќw"m#eKsT._Z:l۟1u91oPAzwTpfkX|k n>VN' uk۝0Or`[!w<RLy “d?}sqfy:⋋* TE Lџן|dzWx=M&hE(J`@K9]FWPWr#W?S" egܭF LFz QHZώTb،& &AeϮB&{_>&OmlPra !xՠx(N ;uktÆAۄ~x`@m3Hk;Fe8NYe4!^} Te)ۆQ~3FY%EtƦV*$ρqQ*3$ITB1ilwȤE f;v7dl gb՝;a&&b d[77S{m8R9`mPT`,h1cR? -( nrꥭ4Ϸ_v˓<ΊN W>FSv3)02eLRw y*+lon=_C]ѭBXH̗D%cW-Q1lʁv] m}JN(E!K^#- aT޷ˮd).k:g&;f\ٳ7iN'~WW)>Q`^7;"^68qPJ%vkiZi ټC[Umn:)L:~c7] `&e4u.: ݶf= 9DP\EA;Y(gPצg{G d'4=Rح vcOHOkfX5̓ *Pv鶲p ɋ'duA.*Aȑ&wGs%[;#M""31%) }E6=c`!eb @Lҩ‰A.."2$"$3fl1tf8=8R[2i*CNJxx]L)0:z 25`xArVic:RE=-r({ĬZs"9^ 8c:}Q:6*fp;)QN Z\T`) A}WB,BE6>  h)ѵ\ ۹Suz}u?o-Rf٩V.]NtUiFR_V?VeT?n6&d7|(3ifROr2;n(0{IIv6)T>̌VNJ1v@S]Fv1IYɄe:@86w!Y^/"kGyV[=D/2]𳺕`˵LY;[rF9eT=&p)}vWLn6lw";ï{\W|}E.Uq9ֱ\];?mEi~Y\R1 g]ތB騽4l/F3Hz\G-?=}s4 7EPEc>U&R <'tvx@ ޳c} }JOY('9۳h{ ̠ZyiP}-x}+}{QKmaw݂ZfhT2Krycoc1M͖ͭ,/ؗQPM+C&b·i%frLuBjDCdb0냲_t^mVZRweqH~Y,)aew1X,ޞyA$mH%o>7XURu2+۩dA2~̨̀4RUVXp NNx^xtR`Ec}kS ٵ6EN\)4O$9PHdFyN156p'Whnf0h~NTjk@E]_ʟכkeAvh.\\9L5 3]Ж#T?$)wX4XEFFy+ؐxэ P9-[i5OffN7[w%A\T - P9Y;hkd56weW]u]!fif_}Q0Ct{𔥒R 3N3R<<9O?O zq׊-'z.]bwq|K[5!){|@kzzqُLR5=p1$i6 88[rAYzD }ͥU_4][W-ۂRD d(JeML RB%D^XrrJk|)N嬯N pւv,RMX&,^& : 'ԘPP2AAz+XoB5[mքB`NRG?0lfrc\'\=U6G:h?M'l5߀sʅcpBiI2 X>_ V߱oӠi%~EIU)O?I?gPn X*] :"=92xt "F먝[R3= ;wvkfS=G?jUXFnc^?ڠ?]փ?K}s_tv|X&ɜ?6Kw 뛿Y-Zг/:{?sI2I*D2I*:J} (8-,Xt.^zZ;:E+Q }"ʣQV2{l;PųORu2L YUyIIb>ۘtT0A#& *M{Km0 0G :=j|>Lkn Wrӎ+Exr۴@3h,. u%HYBhdrJtVP ˉT w_³]VRƳ;aoVJ^߸}TRD$. r\hƈ?|5۷]~O{YQʾ^[Ȭu싆D9sIBE!:'e#H_Y;^Ȱ zX[Y{oP+Rjᚂ7!D5 yjL5ScZW)IY[$ّFbcELF#IFd4&m(nxd4ߖWrZj[fULup۰-Z{QA öUV@֭biW}Ү]kRL&GSb2llpOny'P4Siq&Fm4Hh/ioi542'+%ni0EzHA5BP܍gK;dH KKS b Ȣ-mwJFafSne~vw 9诖hOَ_6NsVCqb*|E?JL#F+֎-H'7 lf8ݽgWVߦ_,WKtް;2:{ u&7 w˭2w i)O=<|Qtb)2z9&g[8h*m`o|?9JKLmIwO/K{'ۈ)+(WI%&'Jm~1ƺXth)cU%xUZ$R[>aSyV,X5Ze,5aI4Y5}VЃyz!_m/4o=w5vX^I2kOGGk~~vq4x8Gtu=?Sv,qqε荡whL{^`zU?/ك x Y0eEgZ-8GB1λnm{J~M3_v|Oñ,:x=u>uEW?;#8 /N_ *'M BXO/|X5Q\mIq U~;*6JF?Zmk !YN9 ,ڮUUenߒ׿G4B#hKC=Ds m0~{#JTW{u )v QXrLHGs%Ydb%LQ ^YPY346o}FjYK8xa3<];W`|UXP̀ʀsUŎ1<LYgh h'ۍiـT=X9333K}k/Նd2rwSܲ& G.Az$/d2*#7QMyIj hƿZHxwEu_ (g楱e lZU"і*g ݌J)Gr{ZZSt_]qa_KKk ֓Ne7!*diy+T(hR`q26IT>ȩ1IF4/sS#>@RBS4|{\f!rW0ny'YF"HqAwp>(% sV(-h=I蹧 堅Ԇ81v叒AƦUh#Y`*]jt/UzګRD5øu$i*&-ӛoP?}{8? (ONTvwht{e숞j'2k\HS\M7 ![DO 1=GW+?b <<DP0Ė5n̊ntSpm`;^f=7̪XXV퓄wlaSxɵ3GՀDkL{(*D ((}TRJ/r@WsrJ Ջ$X8>@Y YHE( ph\?"KU,V>iyp\OBcVV%%ldkpUIɘJIfs *:p`s頪\065'xI:#ݴ&$Lmwf<=TiRLkaS;r )N͙΄6 u-eO^U=nF?2W NxN 03㠽Zh2]Xhk;;0z.KL菞n-2:Y@fVxۘ?K OfLqF(oοHO9R#5I9R#5H٩2QI ЇJxϜ-Ao2Q*@ةo|m\rQ/IeY3=^ZJNo޾Y9l%yU0ֱh#b\tJb̶S佊dj.kWd̙F yyK(dyE Z0${ڠK+5hjYk}rF_1U\Mmŕ\_r$83#M4c;5H=%P )ɣ/"_7ݍ~TZU=_zr{/C <:ez^9|HTpO,jf8+1Jb{Rf/hN){cf$urPj}xMHe5 Fs+0ٙN~n0_ !eYoq@19Zq s11߹<LI+{u8*䪰-1꺷svAN]AU1Sԝ0~zC3*YMԊ[/kmJ%+ќ|EƝ ss*+`ymKT%HTІD-_#WYCȫɗ-ʬwxpÛ<Z E|J撷_jZuL\lz\HI T}L] 璠T#6\R#$.4vکZ1Kv9FRblc_"e0~8ujɃ} ҩ:+2m3~'MS8?`Qۋ,A5<[/GMG&w7^b&!ݠdr p4q'LbuEQ%兑=nfRE&.-RD"R"PsYH(#13 "Sca'ǣN}+y{HWe`7g5KzwZ@hZ0H=`"!0eZl[DkPYZ;)"]XqB:W:05;M)Yj$׻.0ǧF#9>ݲIKBUPJ%!Hv|c9(`!%z wކ$QS1;ϺCflddzphl}q|ۯ!810ۆyS%hP;1rz{<Ԍ<aBhw¤a 7<#PLgڃPJh?KXywT TЖ<,yA{l ƭ+?r+}k}ul:}5cۅwi5描6:'J؅܀ zlǛcCo?܍5+~ ߍd0BvM6ԋZ~zNŰ9iv)*d?Sz}{S (?3arީ9e,A\>vs"U݉]qtCUXߛ*$H@΅./yӫSpV,8 N/FP (qQcsZ|}BЖ&LޏZ*-;NKzntZ7kɵXTQ$?zvtv0`#E[wv' ziu|'t.wwkޜe'L"P89-Rd\a]V&B½.6e JU͸! 
把0V&ܚpFh1L/ -bsA)(&T)kl\\@)!Jd_U!$"\za1GνeeuU, (/Ď/FyOٷ ^(W+ߴ|i+0-iJm *5JVLe0De6xNHBZzR2XnU0``&rmY+d;0$k#%6AQ@1zRwD9 %{;bxaJ|[EayN \ qO4QC4RÉ'JjE4#"UP\!piZ82GBËUHKfLf*(SQT)h.ԆS`:!v1~ w*/N bhx*# *!gf;DTj::;l{Y3~̠Ƿ36X!Fz= *2@]3YC.j(A8oa<8PiԢ2^P2^k#-ozK17:/s4גmɶ>>UOK385걺GBpuɸ{mUm@Q%vaaJG ަ9 dlUlMuW~c:7v:|+0^[^|XRŇ!~ص-InX;nw /ڟ)ZT̤̀ڛKu0/iuͶ9zu1]/|ycM !rL62EBn-g{CKCE8 ]=V eT%Y իC֤8{3RlMQΐ oNya q޻f@Ʃl̄\~Ի=%U13/!oCug%S7?ҧlq% j$ uUG8:hI> >{WQ448n;H)ߧMyȢ S"v,@QDDi$U4ZgN 0ORU90no5<9Zi,\TU.vJ1="&. ǩ qhl'\Ze`G VW- d57sKXEPDMSv90۶& ~Ot8\5Ʊ޼pvIb'ѡ$tlyO~3.Wv={5ZK!XLhNLQĦVl7;OwQCFGmWrM$ؓ:DZOC&157^4%4Qb:7pנSeajƸd"=n/^4!0Qnw]1 md/8JGx,O41`6O]o\ךHw4pWݥE*Ru4sDA;##D{F7 0 Z{`']>Ws(h]֤5vΔm?ֻg 2G^ĩԴeįj~0u"-%m՚ ^ց*GPrz6gR+$1BVL r@bGICަSރ]+ڔ Q`pVf{ɑQ˖&T'YQkd2iKqʊL 0Tdr{sÔL2C nAkrl([0&)hfE B0/\Kr)Q,R%NtZ(h\i@,zRF}v{Znwjmk-@grۻ/fI]jƸ<P0;;˳:25vSZ*Ib,b!ԌACR9`}Nv'U=tP}ۇ!ٸϩ#2|W(9.(*ū@)5;FJ%e§9`][7+ dsv[2vc`q`c&%D8غL%bIgb*Ⱥ!bLjy~5)C8$I^r+q^Xİ/=̍Q]"F|znbi]z*܏=](y]wYJji}f6h~8&w󅠥\oA2ߛ,_ѕ^ {!eZk :&TJ]Z fU"܍8bwdeU](:U?gm%(cU dE:-Vy0mӯgAM}?h/~sݯ}Ԫ1~D*;myy~) k-n+Ǎ(&h5fZfW[xA# "0Cƅ* 2fSdf0}l*d=([+ch g&5޺ѯ 6 K)BZ AYɨs^h1 15Z&Ȕ1j6o1:iI@` G`P )fRvzb]ƕTk_-hZ (xwKy&A!046~ݯ%݊,aWvi o=4k"n`{2iaf4Y|?@;rߙ:/ka]m1߫mqlZIO'|wwO5%L%vhta9(w\0#zjXy|biP.휨9~ʶY8¶!"Ph/޹4v믱ce޹$ +RL6z1Z@8X0A'䐽xիr2p0cVw/ˢVwy3^>Μ麠"歟=/‡)8ywq6FC 1RWrZ]N){/f$#"wF|9%rѢh p.-7/c´c ՜# aqҗ`PjXarز4Rc;6}w1D-53؈JO2;U1ƅ[i ",B/<ƢtPoЋ1{f" } Y75HdV~JZ0 jAΕv&TϾtBZ)Բ^.@r 5bhQ)̶*l$.irJ^dLCIhl}|MVDnEW҃jFw;HdR2R=_!zn1$68Xp{ JSip+ FDspHNgB{zrڜr|;eq-L+ɡ }1 Q GhX#~Ei>):F9%L)?ey[dPH씒;h+rBm(&^g]=:SR NypR[yP!Mnj5{br0AQi+g[m;PR vִdPUc85d 7QԪ R *lwApE!hv71:=|(!|5sTXv"|xt!Ky] p8_!ߵϫס>uO~_/G]^(رh)W #P dy. dKZf"ܕJ0wBJ̭*{We 7ugGLFQ"pͽ'mJ Qian+Z|ܻR^,Lk|aLZ{w6^?؅=. (y'?}:!BquatӽޟX﫶s,ₙ7c̛yR[ØRQUPLaץagjD l+iK+rTk ޕ줤C|rH7{>L<84nNuO]F7}՛. F+abB3yH&֨[o*h2  =2X(ML8aZT}-ZITHQy $[4T!Bjtd'3atUd1Q\iZ:D$”r\Hmi.M1hn(R[U–rAtr0=*ew/vK FtRFsŲ>wbX;D-Bvu|{pL_^|0QbC_,ߝ2Ъ&>y`edtohђ^J\ yWf jȕ% ce-; moͱ@dH?%)r+.5" KjB.cX4eA3.Ԕ@DbUaC 9 B/4`@d1K9<`4,EwhQ&$HZ>WVA36Cش,%{,G SHJݡlBk+pY"ʜ)irb.wC_JSfE"r@A+ f8$}9S 64(`A*xG+R}i  1xZVJ*$ p^{ؠ4瀣M95QBhR VUUWqU-K-pҁVa SN@`p@ U9p70gQ _Snx$h|Ttyd΅V}香mH~ذ_)O=ZLA_'a:h[zuk/rvHlxy@Kzlp~[1Ĺc1ފ Ntrޙ9JBZ^9 |B|O~4@:uOZL/3oBEw\lS_S?yy 1j= 9NޯN.h_e_ #R{q\u Wv ϱ<4mXj,}/ر ,ԴGcga%⊅1_6 yYxњq7RKF=|,Uw^;mVOwqѣ7dx{ӽ+dQ0a{۾σ:#{N E>}˜Cw'/uN 9+$^ AD򙆐vZS4I]سѳC\;j)"v)2PL,T МHZ,) i\aTR%hZP#'% (v҆hO[@tپh~K,ӄ[S Nr2r Kd?Jd97O'.ir1RP4Ɣ>Tɴ4]kRE$b|*O6tn !5`K q_o6z(-zC9(A׺4>6_q_ʛF7m ;XBE,iϭ xV f034+c͙EfoFHcMrUsvD.Y2V8֜i*;Xx]]yPU/zz;然U;`WTEPp `ch4m0k"i3la Z/f4ҬjR/}$'NOk߲U\9vKZ4nd"ZHx n<2ӌfua'$)c+v< 3K2١C8悢E7wϫHc˱˛$*d7BE|w/j!2x jf$KnMR5‡Hw&-秣cɛލ+/O..t*Ka@ ^r,'D! ky1o{\#$QQ?ðj'þ:]ͅ?"HOēVL2w=\T 5n6?~w_+H^ 4qYcX8iKμ(T*9-d̕##+nVH]+Ȅ oـ J?X8l6X6ez90P$ZG ]t}.~Yꋮe{b>[vG. ,HQ!QMnjr3¾&wQXc7sn$փ` "^ M RX_WDƃ& 05A;W96m(f)>KRO)LlcdZ!V2—VMQ.+d064v'ܡkx4Hu/y{ltAQ"@&n ٶ6cZ mc0S4с h=014rRZv&|v{Lp3tST!&;gy*$/]isf9ϕ8/,bb^`]3V?3 {DT1d _71Vx= [CS`.[{}$e rU>5I$*{m5蜅wp?&9 fJ ZiI\+M0 jʚ!W3D pT|(T= f*v>p ?,GlKPI!0ma- 1bplaٝ^p=hJ6WLh0F"P }Z?L'b,x }X0 ^'K!HwGK'@Sƹ<_eDbKgʖȱ0R1ӓE1{^P @bZ\;"&L IÖL\\\wf.X8?[ N*\ba9it9~ ydaSzŽXQTߌ-~GHaSKWy;}h3KH~xHY$垞5@'!!=A z0s|G,G12``c.99 `S;, @OV`t䄞̺2̦mvO :^j'^0zx E!'~KHdQlo__ jL!D+.~V %$맷P<=@ Gkt+8o@ñ btUXь]?ShUz]Xj?))K5^A … 2#I 5nZ݄MZ݄MqkU R\;j9r Th %FdP*nնVVmo>1㎊h ^C (;%?ȥnj'p‘{תs%gu]3s8e(6V‰ 3% qIFa_a ++rbl{PQy=#ǭrHxfsӚ 맆.@1;a-1`ncym#j$ҪƲ,L .$ m,<8= W3X>mH ! 
vc61c~pIZ/J`'̌r{'CGEEdLBȽqX)b<òp|7ր=!+ X ر+#` >=Tv󰎾:#4wCK 95DPAֱb- wVhU0ǘ&Zz|#& 'Ъp}09P!Mrέؽ mRFA1к)q@-F @˭dd;i@f6 ; Vg5``F0Wa5T:"k=.u$ΦBj9i a&;].;\/GN>+u`#0ZLN[~p&$)9\{!G^)A f/u8hZ+Wr|EyUy?XOlN`!a2xk4~ hXz5DgݡPx OwmNs&7)[hsC"xK2Lk%(~5~PcWT2P| вS[~/}^rL)Rvv~'c%{z;5 CeOwBWv4QaF|u1/T7v5:ѼcH:E-0GK)xSgSy;`IIM$+aYESJ G1u4^}^+98P-{44{--2y]X~ꘜ7 tju!E > C#xi;=GHgZ 'IBMb0lWȭ#*n|׏<"y/a͑ O _3^P*^ޒ!e\N)(@NM:Jy_ϖhv FcIOr~(Qv\ɧD~yE=wZzRGa#^8oQ4#_zkFtzӽY.FGE0J!(%g5l3B]$%71e%c>W0BocxhQ4ֵ(`lU-mS(+8eqBGyEZ)"]z;Ae%4B&uq63HܵV.y)CMQC3ǻo>e%ǝg^:H~ B^o+R8xi|LAs;ʝn>.tzj7iK;{<(mfTdY)#L"Z}j,'k /1q i5施68)^וm蝬M=a'AS%a%\ }S,C7;=δro$iPץj(jݻ6Åʓg Av~deRe9ƵORLX{L_$EG(W)IA틹yCXR|8FOvHlspFzS cbV:6X9 47nS0I} Ge#ԉM@X`ׯyA x_X:6{뛾zw@тYO/Fn>^Tw5sV9_jA@8;~;8ڷ1b,#ipxR 4 _d,BTA3?DNrf]ʠxိaԚ;5/}&.H9Ox\ۃ@J&Yh ;xO_&/M\;!eFVgh%4@U*Űidwϕ|TdDy7ޤ"޳'rWcZ~P,;JxL 4i -¤ \أB2O,W}na0rCܠƴldP oΔO ؟ޟ20)1E ӚSYak>S`D`,#*o01!bi`b1::PNWc Ppj']5:@Mw!x'WQ[ֻʫtYhMDb #~i#'GP-oAq(8~!fP)o65yS;[]ݫa}g[|wh3Hʟ=5x.¾ct'Yȳϫ,B/Iz7?]_]AKx-WwVMwsq=ڤzDC5-{t~4޵YJ DmZP2RK!{MJ'l ̄k>,.usKrזP~pYKo# }nN 2g$7BL@)Pm+\Zˬ3/(1o,yes@XTp Nx@j8)Ҥ~i#213 djB/;Ewu#QJ-th"YDdx?&7T 3hY(w2gk!*X+*>͔j"Hqa^1TJ-R;)Jˌ t#8Ll@FO/GŞ鄚P3DP'L00XЃi MN8HJS'Ԓ#@_ !,&<%3,RD!|`N>?5xNrӀZ!us)}7_!!\cD&SJ+ENr$f?49"|]ˤ[&nUݴUFYc pF630ƭ+5Y~M6xKa&L]gEڌ9dsʬm8_b+Ovo0)Vםր`'$zEu ѩv2CxqEf;$[T;QF˵vy/Sj.+1 ZC} v7̙̂rfb)B_]wp5`fʬhaG W1_ar0?wEl~^lȩ$M-LШeYaT> {G;}(˱nbU}iq͐WT3kf{WA.  4@=QݔA\mZ°8c4 ,88і8D;ܾ4xm"Z8{AUJ[ߒ|qJGKj$iG xuhppA-Wnǻ~M {+xUg:by! DVĽ%Rj< G_g(DѾN~tjӊ%iNJznES,gᣒ+Snk}!QҒ%JL+ 6?7K:Ɏ#GDT%e0yn0 clD ЃrCKiAA*Tg kO G]!]ATTroF6s 0Dmm :DƃO%mB&KG;'!0*G ҕZƒ z-k2C#%G?6ӇQʴQB]gd%&9r0ԉ\0S0̲ & {F֓,YɾQkAhƩۂbZl35@ur] ATg&;e!*I14y"U, P8Hz-xQSwJvʵ]pl^!ê%j:omptHWqАk3ȍd8d5V;2vӚ^N,Bmk͓7 >vAX\܌ hu?8eѭj>i}η>Ie][bGǭo'ӹhlII{٤f9+5}ﴚ4|cmN>\ Mރj^{t'aΓ芣' ] U^/_Vbe#ٺ,LߺUA{E ^6V'=>Fևbe{]P>?H>.( vTjsO9‚ea[y@`{c)b;Dbw;@r}V&- IRU4|ڀ=dɧɧމM%R ;*rǍ@0Z 4u 3w3PZÝ}.M8&D?vL:ɷpL#WhK~wn54g`F7gvԚ㧃wó_3:N;=g:!6y=y{pv7oOu|OߟO>]xo^yqΑgM(>~smw:^{v7}{|s/㗟]鋟z_bklU=׀ˋu{tۋ0u? 
:};y?)d,_ ~)G3ي_:M 4\`NF4iAډ?~}cƒ^߼k>@WaJB*i_CN;m[؈]FP5߇5ư} EVOGm-'d:_unAۍϳΟrq v]Nma?.cLKa3APY\zj•7ɭwR8'5\".C sWt`F+:e[~ݢ1xđ9iCߎfv"[&ˈ1@*S0KdF =A|>;!K uNCjQvm@GtM=޳.T3APϰA IbM;Xo[L0Fp$'7S5n ̂] sM1L\{L.=&{L5$J[4"l5ᴋVƠ*F) :D #l;MF`Xe9Vf(*o+a݆jagj< š+*b8%^ n)l!EF) @pFH"4zB^qy\3h$VJ7`Td1o=ίɊ#gYS~~|}~.]fXd/9]2 A(6Mt0uଡ଼^T?QM?''aDX :%޴S "2 n?y, Gc9~Ypw؉jFۭ'n*IGVP40BE)ƒA}1دn]+kI}crƗԧL*xrtU*c0K FHљ^ӽJJ7B\q~* 6g[I.>k71> z"_>vBpŞaHTjܜ#Bbb WUq@) &Cs%G9P7QbIe%U\fJ,8ra٘[ 2O (8=3 5(h늌E8PC#jJ,zEO&ӁU.$ A18V& N+lMP p@ @ U@yc,3B 18R@m=Y +@ @0p3i*YHL\ET#y &e W䛎 |{pW۝V?9] V4a㶏j ^"jky>6)#IAMcAe!=*jA% \e 44&XlTaB3 J`lx al f6X[ Ñ<,hv=Vy5@[MhTQPEq1 \6^Ta|: $As@J/ n8\HR RDHd 4<`+T>eHly9Lі5:/Ɋːt~(Li E|rѧg`X1zC4ۧG7{8=G?ihFO1uSјc݊qF5"yPyHAh8'+ɂI]$<72ɋt͂tv]1!jeߖZ*{t` @8e *,nL-ʅ9c)/섁* 17jGU-T^:unUr Ƭȿ{BmuMlYGQ*4iEYOu)i?RIlw Mkt$s;El) v7-cg^ҕpͪ&UQC]~9w:t3 ^)/owWBTZ+1[[w7]09BO szvF*%/ )OAN` xȸJb*03; ]ibX|sl_ܔhҰ* 9BZ_ [EN!z cϥ6dtK*(}^*u%0\`ǔNV Z% UmΟ񪖔eŋ[NFii۲Z4,2xw:_V]Rf+h+<׆Vext>$"T yAL)0$\I(ws;+r/+o VY[ngGK@my9 :)^A\NX>"[ sH?( ޏ.4{^?%?+駋sI'2tR ZGKQB)k7gtme~ yoXk{Տ_/]{ >=?\R4moLrkQ[vݝbmW8Añ)b$NсQL~^.[cqDZ QiiQx<3G cEyE\ ]?igȭ<adek{mtA5:tQ"s:njPe#Ic=߸ƞb p#>پ:,9!xDmoC)j:24Lcpb~a$ItfB^g7Eb~|qWA قΞ ը4h)q2ݪvDAI:jdPp!0q6$ӿ+?_ -fȫ/6pgF$2]x Fan_AxXLpқ{FΫ/We/~qd7mė=3HuIIJvߠh_3gJpV, G]bߜxJF-4x/ϡl{;O?]hE'I9)Y*=JPB/Hjugk)[`+k6YS& hA>3JtGP NR,Z[\^o7:+NzɧhlrC2?OlV89lwP9Pן2z }جۘL˛6`H3&9DG,1sE ;q|҅3@ös͊yKXm1/o5oVڙc2qw$!/͏7b0c\m<}l(e{|!=ߘ/Az.]J3\#n& ֣Δ $8Dmk؀t'.9a2zJ2|fİe=Uflŏi݈`_HP$,JFJXܑ6P)0D׉"' (DX=#bn(B͸  PL^Rµ2|_buS̟K# uX?{1kC#4dս˘8w;'{]ds sr*"I y+3y$!>շ>;${2z%$f :TqN˛3 ,ԡT`V#ȕ$)_n0RA7VL_rv/6@۱DY8Fތ`K)7}F)tؔx@{æ@ ͈wrP <h2$kV%֐\ ]Z_>m5ۄE|UoF#JN^ ;?Aު)_B޺/ň TkSl՟/pryήIO_>?*w${RXC[F|cvĀ\'Dτg҉O.O4k[:ҮXx9$^[)z1'CG4U}d{ ,ip5 ǫn}Tf- c<`maj=Sq.nP|xKμ$9cjeQN Yا Nb;|xo%/k;{1[HEk F?f鮳O/FYyz7d4L%90?2D݈g-ɝ=tgde37 0<`ɖiB4 Vݗߋ7J5Lf!jcM#+[8E E~ 4|T4t'v{LׇQAkFv}hH9JUTeMĺuiTQ꜐UZLQ@n/A4 @4ꀞ$pCr؋>X%^['f=|}v-k45!Jwf(da1 [yHۊ7e[oax4 jhr*&>ŚLjKk1N Xol3u18W F-A#9'T$4X-At޻O>=Dv8ȷ#Q^R ɩ̑gU-iKK[%#OpĆy-b7H z.Qg򜡼DȨ7K,ۼC_|1A|IrX%2wqNj} قuATRZ541b^~XFscv [_3>rw`Dld7u߂01Y<^ 7;;i^t}}-l Wxo>1:,վiOB)k<87=ߘ:ʥ7ޓT:7Di̬zx4QJW 됲O6Ҏ.[cuFa :7\ێ2W `P"|Hѵ/k X#-,MSXtYJm^`@޸=WP60t+GH-ST(=EkgR42UCjTa5!vŋpR KS9?zq53X?DlVW#AwU*aO ~-l}idQ so2+yG0󍝔;XCc|FczPij(6\Phh:x9VAfu٣h_aF7~.i_lA{qHH>Z4D~ҽ"(}SKhMrPINB$L?H)J:}u^]+K:KV Kv֍KTXqB.^\9AU= Y3x; 0ΆDN;s ]2cUz:b,a4ŭoQɝǻ9BU^'fl1ȧna>Eߏ"̯Jda}IݘIff_:4 Eسf7pXצxSUaG'v ZWz>}*;=F%"I]s I[H b漢ۿh:ߎ>)FV>{Ðh:'ye/+D\ u|tY5mGGm=\qD&I;yt+Kb ;'DFNMng+HG1-Y1p3(zw~uփ7kjo|0L.VL[gLinX&#įxo2 zU9քq ̘ e2+@*G6k~/9p@u8-: ;Йw,Nj/52cw? M<@vUC1f>6˰Y(.5>z˔1 i \S):i;J-#åF܃'0AKV9 ]6,(c "ROk =ugD}2Q C:zֲРxKE53JHq fjR LKkY!0b2kbF(3`83F `[Y$+"0X\! Hhn$2 GL!.g)!e|,$^rY*cO%>!k/ w jNA]* - ctrҜ ,>Jx5}AP%51uHy 4ҡqgh`?Jr^(0d*"0#Dbm%`.mTR b 뫸cp5n̗K>j*倾Bc-5"9VH4Kx6h&hH{\ۿ?:CI9ƥ3&Ӑ͟dq"n/wp(T:j`Ι'C#yLʘ0Lm le s{7f_~0XX:Y4vҋ!0{a6̈M/Ca?W"nqb3#@/J'?3~;c>߀ 0&''^xV _vW'U96q{uq0|f *{};6Wpe Zט/]Z`yM3s)Cĵ)1ĹmyNsl`c$9L@J8bDBry{ehAdd.>4p8) +̓K*1~[t %BIGqGpg8.] 
w`0r.d̊",w{x+,^͍\49ȈB\j;!0"95D!g[ϕ TPeK*U!?8\<ۛXc5tǓgN!iX~fWP"p s%o#>|/\0 xI>ADdgOzBOoh' o;ç8Cl_;Wf2fMhӏ!\\'[$'!xv_0@`1mz~q[WNQ>Wg怡i7f^ ЬMqesz b/iV<=By3/sKc\8Ỷ`xi?=W~F:Oťm \I**Qp7[KHlPXu,ݦ!$8P@&ΒlYLHD%Z{+PP}*\I mQ)zKnp*ɍwx[ԢfݳhD(f0KmrSO0䠶1ɣO}/d6aDo`7ǹ?RzgƧW{45Iab?ՠ1]v2[~0ws{Q/0zAM]ߦ W+e'jyٴ<`XE!/Xĉ6S?'aڕ9~5O 0-qH<񧜭 ٓx˥PUa',@XS[ޗ j}LD ,͜fQW w2Ag` y MY9Ăadh6C& u)w]{S#6j'} /12uw[^$!*VkYP;[nQoMxp3=(yP'anLҝ f{k:hI`e6i`݀ !~4[\}ZĮ`,%1>ꑵHǓ0jF?!fu돣ۘ>elCtErH`!^w;\./'6B?3TU7KC?KNEy/c Y8K3s4dؕV\nhb'o\M/[}IҼP)cLZ\ZaJ[E^"tA0$DnC: bO'J&^y& ]Dp"r&}6ɴ0cPHӁy5\⩣`/zqȁ Gț'%ki"Wb& Zra< t|xDWrǙ-˘,Ӛ kͬ4@kCm.̈́,I}k/Hs0qLc |zm\p Z'`_ޓe+=}?=OȚ``t<6L3ޜb"O3%:[_g=?0W-_q_C~;sfܲQّ?$R'XRl5Og*H0J36xA\8%} /JԬp`'f !%T麫ss!'{$Q䛯]Aj!Nί`Fͣ9*@˽EP5X>Cdc^>]F6dٓ~4JO{X>l} [+H^M$qj)ayw-e| 'qV~رn 7IY5ih;(_ Z*{FQ6o] IKg*֒vV H][T(*:;'"Drև NlH袅BT7vS^bC"Jn xWS]iD2݉C撪X{AHs +fUЦ+ u,@#KHh YP!U\ \x1a2 94I??Գ\zXr]ko۸+F>yϞbI m&b[.[vmmCHc53CrgcKI |&šAI" 9*ãHXQNUb_= Q;:tpȶں/j.xUMgY$Q ~,MIԔ#MTdӐy@*+9H"VF9+Vp.ľǾ c5r(EQJM$'saus8''F`H]?=TgE `PEtW`rJlbJ(VQh Z!AVΐ|Dfw},: GR,9:[pP\ +Dyc\Xjuz =ZZ\ĴNAL?!ZZLɴ$RO|pnEsz L t_G$7vE ڳڑ"0ptCo n~p`^Z7;a*i @৴>^[co5^z`B3gm=niķ0u~lx!;SAwd&uρiO>xA;hu~itr~W v5'ޏbo?i߻q<:{/[7Qc'`??.| yObp.'&g/Dk ?$Pb@8Kdp心7־xLg/_^ I|}uÿ/Ãsѷ7f/|^6o/a}aQo^[w=i8El]kpE^gmcл]s4 릙菕ōq?K:]s\c|@/)?H,!'})r`HU c]xQL8#gJBFDyEZm]:7g[0O%~I; fCneˏRQ?w{ŝMthb?dCLӦ`2)w{xqyy<8./:Wߜ i=m7Mu Q 2E xԼܢQv{zMגfJܲ7_6( ZUWP Ta`0& Ӟ7;fp8l}oVT́ˮn2sf0K0A?Uhgǖ0Uͷ$m:^םN'HipDpju_p҇uM]f;|gɀBB9;wN/{ЈV?JE5{ <7y~K0,S*ۺa'>l_>}3'*WAg` ˹ 2nb2q?p﵏8gSqn .=VF #t g{/5fLYuĐ[k֮|&e<8:>"M{c= 噐`|!2}$4VK|@2) !> c< FcӐ98f>Ck{_^$S=ӎ<=:}-Pk> xܜ+MP()hoݮODQT$hHbpƸ׺&TX߸Y+84]nvɯ}gf{;K8X^ 6VCcqmd! |8#SL4(@7>@! Y$Hd" ʊQlL ~D(QTlW9H0a:`AVGg ,ckpf $8_:4v5\{(`Uٝ F+ccQFcL[5?\~zP5:ܫ NebDb M ]Fb nr eµpAJIA"mL$~m Q^ e9T&T'{yru|rqɐۨ'Ş8}s3_<,៧ɛ:u}}~x勓W#λh0KF&l致Z_{97N 5ۘF9~ej:*qᓓHǯ0v|%e*I&)q/S+NfNj{ VQO6O ,+MJ V;QdOHs> 8ݯ?5]ږL.㸿s$]]~Wnoc"~xǖq& ݌wޕ z_%8sXX ]r7DB?fur*lPdN9-r*ə(ChR2YDDR Q Sp(vR Ka a™NP. Y"l|s,w0.¦؁t6ڀ͞}(*҉։2M1.dB>. upj}:3LB D 0XBЭBsEG81Xly[(Cɣ{ !s_QJ2;7v*gRu􋜳l %k*< "hBth<;V#G/ο]Z> gOs�JX6yGA{޽OC[_v>ަ|<)ϯ ^cq BI :$x|_?WRVurTsL˛2tah6ΰYȓ97p*U5/`>GЌDǁłkrP`8 y$T>|j!V;ǦUhY& ڝA5O\[zSgUظWJ5Ո͡1ecv#7z!uQi4:0H-y:0O|[[XUQJs1>m42N9md+U2* Q[ >_!跑 @qzẃg7DFːl!OovnϾyzuN.i~-}y3t1:r(,>B[gaa_!tCUiƎj>]j;53h.Ul+l叡U?0 /qRroO<&+ .ZRY.(qA.>)J! D_%[o.MԸJUJyP(O6Td]s,*̚xB%4$_D^$9yugi$Ҝ(#%4[J<]'"Wqӭuz}0,XFfZ$$}]&wxOf/6^Z`oYq* Dk1>J6q,OgH&9I&u%z:"x]q-#NBD 8LRfk9w'SZLwqWAÞ\l;k+[1tťjmmLS%GmkFyuGr y4`m`oZ1U#ּNNxl&KY #,P,O4)c=I1&/$E4Cš+:XP5hi-E@k4I ܆;z1M^tn$Y[dNKFRݍR &3LqH(HcBM 8TqhSQHT{?JQ1ö?d^-V]kjKղ>]/͏kLʐGkXMrLa 4Is\JguI r-EqYI R:>pBճ`8)*h>bfH 8Ա0#jp B(&_i#p]R^J~*6>G$dv{}a} U22>q4QȘ(f" >/Y,s/IypEX9P^cY7, R\fW5L).=kssVsSQX2w.B9 yE 1gobѽ q:SFR,2Mxec7 MMU;a|v-n"D.FbĤ_LWp|VUK >~|Ϳ2cYEu݋B0V~n mB_wˡ:f yxyK:f(cyA M+9IAFBmP_*lK* KåXY} ޸QJ! sjTrBPU0Lnf{6Wd7GæEȄtLB`?1Gs6T frMbV Aт Cuih=٪sQ1":G[.-̯Ŭ#R.\CbMݞE#q@A\ss)MRˆr0V:$‘0^>]Q9H9dgF03m>3R9gPηlo KhNz}&~Cۦ-nW+H;۬l(g d3kV5Ha5oVQUTiYlȞ~Ou*&vat);`6RƘ+ a̕1gR:*p;h#.D"@t 5$f _;BDkXh[}!Y:i qin4C[4 4'sgg$wmH! 
pJ(kQi*(uQdx@wݍBGR&0VSX㽊On12o^C I5k&D[Y4@ vZuEղ#gޞ}02>̗?޼JۙvVTTfCZ3߭i;^hx}Exrsudw|2/[' 7˂YOi負oҦEe>)y캄)v4貎jƕCBQ߰ LjD'Cay.>cڦ>!nХ3+ҏ."K $>Z]G PIuKXihK1ws8Q(x 9&brSrW|#d-X5GM}an&_ =ôl%B*ꚼթᥬ=xVs,~HW3[9 OӅ\a$X۹Xow&Л>im2/yӷKy ƭvܔ VnnE3)91qɸ8 λJ}9^[v>t[hh"h!Ԓp]&ULer; eb1bV8:ٮMfYݗLi*kiƄ[jG ؑ4<*|(z=gfP?]5Bky:=lzay>c)'B?.A[!2DSwi&8@ƒ*-nvuLe~ܶt{@H)*uj)T\\ q&,@7%*HrX?=)gin!,̮;Jk{>{u,>;Q|^}ik0 j-N:ӧ|{lQUqRr'w(~h\|d5\bfF[ \'/Ery&MD σIi|uyryo%H CPϞs8.x+lciƌ?`g;/782:/>x VpΥۏsjVNoՁa;;TYCz;NrMdk6\gjbQqB myQ\[ ^b99+ ?d^4i}FwI*qtrw2䘪sZ9A?,7<&OVNc2 UɓTNq ڢV0FOTQau :a9m/ wVcbؘtnv1N ͖YݍP833D]=z)[ Q|xx.?g~W|-b_ܯO"^]+Z1ۑy J2@d$XբO썮1!A gR 9)mXTR@18tqfv FYsj;Mb1.ynsӔۦg'cv3e~vQg͔j d;ͅѤ4j\s,=ʜ-2p]6kdw8ʵb߉ '6xe{x{b8R$Mf_| Q:qeZs_`FyhQiw~ʙSrʙWSN|*84 F' -&8[7þhY`x C}H#|jA#}0`]V `Z~\v҆[sj>L,!1!4ZUp5 RJÔ ;'z6NqYbf@活0 2 i}8VneGq;&%'u3 9o_7B ENxt?:~j [ٽQVrŞ \1iih:#n*7~m:?kA:݆n5)5?hp=G62ϰ@GTPN~'DJdqHD![.}ۇ}5+}8]PPZhn第g_(ިz%k}+ujƋה~꿉jy?ӹ ^rMjZQLCz}W`}I7xuP:_|8wo_~5} sзWeNG۷at}E\Gaܿ\rˏhKLICa͗_)"""+-DBDADDi]0gI,ƹ[  F an} W/1һbDb D7sq N"%>1TDm%cJZZkx/L 7)Ԁ2ac b3%P 6HZhC6FHxRcr-P~ᨾʹYqb104a EΛ`^RHL {m>c8W  \T:SE wǬSʽYA,q/(Vl=ktA0$D!F@qV2K/ppBp:l)x"ZFcO4ϿDRBXsIN%-Q R FN/ńUn 9neX59# $Qv.Gq$Li)-ia_sZ89B4 V L P D` bf `)bpkuY&լFA xzSF B B@Ֆqa]s4TNǨ<2HҸI,B sO:޼AQe"zRh]x'rؠ(n!9Z01^Ga%5`"  Z XaTP„qxZ`1c#<2VP-U)fANfJv@Zq8fx ƢQ+f(NN2TLuz.Ussk3ƅf_yX1|ӞAl8X:ΙZ(H#A8sKQ6Fj,VAZU(-hK7Vഖ2pӞ} 0ҰcD*!ʊ ,OAAJy ,5rzʂF@niT<;uhmNsHW~ ķsSa\9q6h`p-ױ'RIv0X2ӔiM$ SHTdɠf8Jm5A RnisAZ7e[*`xU~`keuΉG*SS)쮛kys;Y1ϼn?̷o?{Yɍپ}TBr6&X ~>e?ҋ'Ъ44˪6 ~˟.w?᧋Y"_8ڏWjzW/O^_ԫYW/|7Vmۋ|O##r,ֽlswuOSnu?MS~Ԫ;8ġrVsRu:CO^y^s1m9 hoK%,Jd*+x$?%9O]RZ>RW?,`u'[ܒ6#rMER`U'Svt\|17=0iʗwXL%擿n0F"Xׯ6ݞB"!$ӢgŚ4Of*QYY9S`qi,a <IZEtQ:#dZz ̕j#mJU?ԧu3;_`eTP2D8J ǂaB ئ %Ai+Z$,Ci1J;+ rr"r֌5̰Yݎ&ktrpGb ~20A$7cy<u eiPDH.b\l AhBn0 z m>G%^Q)͈3"a 3~˱OrHJ2}`cJd>ZJTC5 yf{T0ơςT02Z y胓do>C\ЧX1 UKx4 S2ރSzK.V0WX"ujcI\c!Ü4,PaSJT#m2%cA`qm }4*4!JI(Mq<ΰb>,ՕIP')!+4ɷ dR#{ԞtNMɃnm=<\𴘉.Xksnt`ZI&01ǴuLR8b)gk:\! v4ޓR:qoC-6WfgmûQ^)[g9rdz)ˮCPo+ ˿)Km(EP܂jUƂ Z# 2j|%l ug$e2de)`cҲ="zKM+$')-RBֲGHnYsEuZy,McG&Q[S0D:,%Kx5J0K,F{ԛ4![?,L( 4)'`5<0[^c ?H4͆iDM-rmc]B_z{Ρ3b\_ݢ@[$ghF-NҢPPȈAx8=U1/Ƅ0v oŖmWxmՍ[ǒypMl0ʵ`;tKۛ4g7" K8Fc!GTҭ ڝsAӋ*MnE&] (]NFe:a$p)[6|yGet 0:d~vҍ6-5*ps-@aʫۊƲ}~veBq*|#(\[?s֍ΖHbV˺;]v#960۽ҡݤ?JpHZpzf!1I&4P-L!?]3O1EPf|`qSz m4rhYs[)XҬZf Dx]Z/*E#U^d4jR|sU) xEj*(c&S-&[OЄ2K-Cv<'RW],Uǀ"k9X|3vQuQ͈CsΩ3]`$NQmc&`*RF})@l13^\VIͲ}i(RZYi*ݖSARp8;v2Lv^Yّhel`9q:dLQFf<(Dj…4fܜNb%`"jlaY`=(c»E&&ЭVWޝ9`[7v,[/7l%]w'wN"Ʋ ߢ·?+)^*z ha % UK/zsVu35[d9rY0_E٭lXaZfSi7mܙ~x_w2}^ua2?-ʽ >L#+|[_lދo$n1&R~B1Q{'BD{1B>8M=T%P3Ƽ/;gb|N#N% Z%hT]<{K ;vrB |.lIa"[|] 8{#h5 d+kRN7d{'Lדvr(ܛ&l*4x~x`07ДmisO3Bӷ+@G f5O(/3 pn4+O2FК7)tEV+dj0l(~˧؂Umi_fbawPt< q%;5Nt`KFY?_eC׶f!Nzfth-p1Trͩ%M)ZW_7'ΦJ.$ݸA}9j\H˃o&h㘜RTֲ:Y=T* ț?ƓĈ͟4srG P~d}[FIzyN4G?C[y%us0Vѿry 7Q+[?ԓ e$郍$}6!Ix F"ayIUUXd;fֈ0"P+@@h $Px'_،?TLmڻ#1}mI݌M9Y9bo-f}(^0NFkLN֔$}nLgb8]tjvQ SRxV~<͞?6c@ss&f 0'p_`DFRko K#!SfyK$X}$^'*ðd~_~j8:_ؗN ezC*[C';kTžc~2xJbҖS(IQQgw!' V1#WVpv%yw# TiyTUеS? )5]Y"Z#< A;u1B $ 'TIl@zZ-dX+q9+6+]2E=lv,s@gTwalbR>;bG\I0A(38Qh,(!%Zh@mv!DqA0<bCg"WQ! MH"*do$K;{F_:oLUUr`Z<\ɳ'v˯ˏ`+Z]$y[6 >|u?OQ=/'87 Ǜ-gdc؟<fphr16nvif+f*0=d9sqHPRpX6O). oYʂDp>ip9ѓ7@Ġ>N3M mA-9nkWh'uFQvr\!] 
9ṞBy:i͌IIWw?oj`ٻGn$W}P؇p3k@2UQIe}x0}ZJLTFʌ2"HFlL3kE^{;YtG!aw<ՖE217w_\H YmHӇ(w޾XS$jz3|R8gr)?U|5o~fny;Veٻ9Wgu7:n:??!n!n:Ӈ`~.0w7pY )bW7Nn1ѹDZF%.g\q"wаQlѢ:oI+m-"02ʁ>99Z14&q)kCEIy*ֳHDv9J^6-Ba.1qT !x Sf3kӞM%YfcB"1b1ki4mOyNo*1*~R/Ef=4&e3ޟAٌ @'*Soذ,QBhPd)TGR LG2di| F7tcoR3~C3JߘM˛/guo;i;ٺBrJ1@ N(mhʤb Y#3 0V3U","RYN"( -uYOB LD/[ oD~!2HDv\b0쟆81\z$7k}~PxY+f2 R*ik*=Vutp2)%n SM^/'q'#}(ER qt VVoe/=N%YESbM=LCjEkXC%a P{ym1A^'kPV力q:;☶Ϊ^Jd )ʊRYN*Wu.)@:!m2ʄ?̓I:(P7b~Փy~Li 5U ф#2S|7;8UNcSB5-H?*by׋'CR0CEE|哤s'h'Jq:oJ;JbCڼSyIQL>\CGPCb+QyX+)״|uPNcDQ^7ϊFu༺x~Quw5wh5y1xae_"+k:_ZJ\֝x+D#5.Ȅ,]ֳ~}#P(g@I 2)KdD4)c$q^TTa%Zm()g$k^M`8T#jI.Nm#n}xKJ'*Dq kfݺEyHX5O6"h3^U (iE/!fvžf+Қonu-m^x t(ZQRuV\FM#^AE.IZ3χPX?bX,TK5E$s,j?XYHvٕTU+i<=(@К2k{~?uw6on@\xsP%@[FNM;7~y͇:6[3p><&c?EoTЦYى2 Ϯ}`ᤗnZ"e-mAVM3AD'Di0gI,s+ej5P"EB\/UV%V7Lo߇v˷3ބ͸s~cbx3@w~ԄC#uόEgYe.Fa: eƈL#,"IDQl r }>?eASAv:0 LaOZ/3;G{LziǏ1`I33OXh%`V7^N.dtrV#`H v\z9fgJa噎ŋ%hwvlM}|h .hϸ`@d;ݿ?vWֲ,MlČ0jhMc+dnqOޅM%],BagYM2rae1Ao߿$x{ݷϻ Ƴ[G0Ow]HN _FBՃ]>Oٟwf<[o?g@`whiV_<Ì ŚRs{~Cq=5<(K(pU7{kCY[3Yy4:IS"K%pQ85B+ j Czk(+Ե ekؕQh:~/ Baz6Xyw1,{VO ]5yLgv,$7oJ=Wtd5L :[9Yo"JB + AVblXRx0Hmm8q}|J&1 c&9daKQcOZNc%RrLɈ=TPC9ԩ<^2q*}6V,FtKV-JOL,Ih:Z Ek27 3^Ѥ9%sr&HQ#%uTyo3xsS7y;L* wR]Io(NTۊךPR\aR$*0ʄ1&0R`R5D / cp4xH{&3 R`BU;S@D6?>=Ym2 F4.5\V*,c~l9☇ 2)D!C溜" N[ 3F2i5KL; ox>Za`ƀ n2 2+0PMR3ݹd5fHT[zd Pt zJjCsB-f@Ye E0ᇂ )MB0"o54Ա##1| $ -4'U݄R Ansr+[QV;[m_8u"pCQZNDՍS[7 HjEγ pyַxp6'ҨFS9k&48&j}{ZI UM ލ$A!D&| ckozS c.6W uےa g_mR+J՝.Eq(PtX(kSGdccV$fՙCBRҜ'鞧 J?'qb ? YWaIKVi8pl.0jïc T=;YMT4ޠ7)ǁ*ya$M BW g4dJ@ێtk{E*h,^>HK5_\YYӨe 9GN?ǧLRm' yhRkTCɞg en;YUgjQ0IͼEQ}KgYq)2!/^4"xka~N5_,J拗G9a-ˮj҆NLEPd b"&j} R!'m!+rJߐg'@1\?W*Y=\=G}ӭ.eڶV^aF3dJWm)w:0s^f cAe 1r2P֩Z#jp[Np^I60I%pl  pSf]fBI+2>da AƗqTY˨=୞y!+qFE1*GcknmamH&Kj} 7:*u3RUj4R // |Y4} w.XơYG" ǡg2 <|Ao*k{wy'):sF]lJQ(RjAKgKͰIM&E71xCN**hW 2Y<χ4CVs8Wj_&Cc1L*RTw\%ZIs!q[ %ĂAi]IsNBlPU2Inڇݖd E#s)g>>$~|< 0ޓ]JBh.%%QóyLQӞht_O7S.8/D&a^D;FR^l/6bAL˻cu34Kv|x|('J{qIJшޮvD"Po3Î(<]6?ϋT®@_ (^V [f JCғ-9pc5/2SB~dd6}ZrPEg#T5އq~ 4~uzKݵW\ylw+]R՝zyVvcMD+S*;PEn*x ~_-cX4sDqW*6/#/pFÈ1>#6!YD!$ZC GpdV:T(S{1QPPp&(uNkZfj`4cNYDgR2D:n2)Y%g$rZ-M3ĥ%:0bP`MIyxlA̫vڂf"VXjySͪWDm8լe LZ4`ϓsY%_jjd~l6pJ[- ="0E|)`vb15Pź羞VEu \.3װIgȽ/Қ_(up ,Гv¡>5J9Ɲ)bRmLNr8#lFb6_EEsL[%? ܿ6ӌt0sL#Lÿ# }M >7y}+vp6m0<2ٛ>͂KOX)8/΢Ad!!pHale$ә<k̐O)EFQ>a`'l0MoSf0Qq%D X\=.%`C -#QͼJ l: RN[b+cjj+Ab ]*BsT٢m؅R0Cƪz2&.Ni0s^f} c0z0rH2HKB#ރuq\Kʛ-J=a o i, y,McŘz[7'&y#_}0Lşo,P ɪXO?|uKx2]Xw|8z-~g`&o1;\g!<![ЅLzr_1F8DG.;5ZNjvKbj%0c-rHm^qEY:,2DbdhS eB{)Zl-h ,B/%KSN rD%X[,M`;ZIOc X|eM- #R,j!p[ ˜r\i]efiE tL3 3,LG-^a)F3Z-/M }~$*\y 83 co_r-`̵w¦6/^yO* 0BI] B%֫+]`_.v`()kqXen/cJqƨڵ`˭.c+v4*0  nP s}Â[Z ~9H+Ih8Z9!lQ&FSI Rp Uys0fn ;f*g%舾$he̢SY*ք/Se$lF jErPV@O1h8M2(Rˠ*=l3qdc!2Nj Lxd]ʨV5?;2fPNelb:U@Z+Q Ƨ eP~1f}FS:QQfF]L#/s2r:y~9ű߁}3e?}?9fa@ik7rnr=xCXzP\A0>Iv`t;!z4;!5*O `e-^FBD!DBDQ.j.WȾxEN}r1$_>.|~ -r802q!\jg:Dvx*BhX$ O|#Lϲhu!{? 7 ;i~ Upc0%Xޢ[LȌ;3> FûߦςҰNLSo|':nG4 U @~`Q4lXǓKL 5@dJvE:!TG&[G}%5x{Ng OćtXKi Ԓ kۍjhSX2=[o+&ŋxDJfTO8p^R @RJZ:֞9U î T2R=7_ A%]GO1W U᏾Y`(OThI&a*3YebI8|eN8DK "u!Edy=ncAH;0; %̞B1_*\]{^%P)'zj(p"Ȍ@.]T$ ݭ"fa=B<*)tnNicpo^O wtrE+yҤut:+,09ʓxkѯݔ T@lɻvϝ;9qqp)>1n{3;P7ooG34hQUֵ|!=Z,}ur6k]k?4u߭"u&x-H>Ƴ7hЂ*e_q+N+N+j%1چl^->n]YAIZR[ ?x/1Hcziz{J)@iBx Uйr ȁ2ʉxO7wR$ٰQYhr<>I< CL]DZ4xjR? 
- NbUe5D:”ij4Hde8i\>GBI`T `4M -db(@F=oqňc)H/7 4R+dƭYG1`J+6r"ݢzepH~XLcI[jKc%dlX|X,E}Ep+!Kl5zn~,`||bE8+O֚^4Nx?=8+ J+kD75֢5NLs'/>8B12s8'/Bhv.|vq{2ZS-$yIɻ0QOd̩ٖ9;<@x.6eiϊ'6n;C?!@He3pǻt!}$Bt֠+(xg- M pYm=lk{>BV~3i{O,Rhi' uTG[W0 +5w<~ِf3@{|`#z8wgJ\+xO'?>L.~k-Z:tx>y{9N ]E0beOo7h 7wg 0u))I͓]օPvco }ÇDzvUqo^e vԢ1+l__ _LFyUa1 +š_0%0hI ؗm7ۚRʮ!Z#J5<49lg<4b%]t^&4tݴ%$!<$mhGf}e !\+[殴 ?0 iڗܓ&j;9.0](VBu,Goȶ ɶLU&B`owԧ qʇgˇVrV;tBp'{R~H<ʯS]f#ƮKZأʶuEe>vc)W-ͯ` F^3>\'-qiK}d쬝bm^Ϫhp-mRfj\$8F 63e%_@$^W8Vμ'ח?M+F9ڱݯUErUD\c?<֔nfJS&|cPt؟}Vg;%[QqR|71yd}%v 6pot$%B[ [AX=]d ;Fc, |l i4yL͟\&S-EǕVx]&|a<dzFǃgDg{?jSJ?t_Mlc%lmM]蕘c@lG`k7wmC5i,h54P5;]!Χ2YL{jzVkVibP:2U\c 00kSCpۨE> .+z1J3Ïa-` )hJ8F(az߂P5UCW(cGҊGRpB0.##x#IP6Tx1ik%$%,"gEfJx@pܛ".T6Sx۠ |F;VZKc-b%+)۱%쪅} Q̰n~HmaX3;eL[ ¾CvJ%Ozx]CvJo@/N6VV9$ Wy1Hm2QFNyoY7kg6kw&-+?#4<%_ǟ^ݻi0 @hvUQ \%ԯXUA)Z}Ozx7\MoI雳3N攝rohg]N/H{f6"$.^ҀJwt5=cw?~2rxZZ cg'($ŬA+zL~72Ru9=roE 2ׅheseMH@ 9 F懨ў48ɝIp;9TNKayBtI6DX,qk:%Wu{ uGk-J"~ ( LT!ҮhclF[qd,2&(;eeqĆe_HDbv:eF) hi2ddaLt[;^nOdXk2|=ֶX!OkT}U&P)e] +9'Enzȁ dV%%D`Ns%~öpg;eүr%0AjNwfۢȆ{QZ4SrLOQߖPt!}$l+[bTubNs'hXH#% BTD!!ViEF2ٽUFYڶ2(+||X'k8C,8/FIM9bKp<ːi[HfS 3u^)͚|̚6PL3<%Rf\=ԣY9r7$7NbJ.D,"KC+r}L&]< F~;2KkFO-лFeE3ZΧ@FWM9F䧳@q-C. \ooHH!Kx0B>%r7_P4`cpSt5>K>/+K)#{W*YVf#ʧ̑'" ! IxH@dgtc&E/XE]2FhS4:B) "Q)јFGoKRh9qHz+g2.웧0Z,@q !J .OHvjJ ?&6j|\RdIl^wY6,dU}!gj{ܶ_ˢ yhҋmvl4cOlO"[~MP7$㑥$&&bh*pKDSS@4{1ꮍh|eI:^Ow/qq?~?c_~#>mOo܊BԆ8Ϟxo~/O4pp|qG '҉W1J>]x >xS*SbgY #qaf.2'{q@WjJkspW^jץe+ z~4綧J&nnk!_JD"sGW iC۪ܕ^氖L7Kau[TDĶ\L r1CDZut=e x;+CW08:h?>ѽi2=Uнʍp>5Dן4'8@L'n <|j?!?-X +<Ah]prSoo~=}QE_f~MGOx.=2yK b8X{']- )h'UI{HZ6S;oj4eBf<3Rǥ~?5BӴg ƣHkp1[/XziBH䟏ƗP%,:CΓVHJ:_], D0fb\Q@*.I- V֞ja6{YF@ՈRt*rJ88RpjB1AjSP&'չ{Yn>fV=g3+Hwp75w攔0߇YzqOa|ϱ-^ au y{0Eؐ_x$sw) ` c;%BLq}TCAqFd4뱣&p~tfŒnW|{>Y'Z{S5zXnd1Ytj5 ?ڥ*HuK %s:a¹`k՘!8TQEr-7DSƗYAtBK*Nf<St\R>(z,7 ,𳄂 ,!)/et8#wςݗOt` kM2l CUͶK%Rw=hʵ0XRQ֙$zw +2o0BPo,կke +Hg}'ј {HwS8hLǣSM?X QYPX!!pJ-HSN.Kw> R!ƪ2SDYBd5@A2h 䙵,tw[-4,Y| Ph ,R7Ȓp-Ɯ5iNR0 EIEE2vmTBi B%.{kDʧ )#Ux# |ܗ 8򢢡! '^ 1Z_;mp{rhڼy} {Y&<楛GoaIQx~vȘT< T=J.?U rS&r=fy2owyBN?}H?mbqb@:hX ,[zi2{!L,&ȳ,#17, }6.^%͆WA']qpF aDOJݽYQ{~~Nnkgv<}kߎger`\nG=4\;;<H%D8G}1Gðn9?C)< -/?qzՔE㰡h|h|HrZCj5H?+eԁ@.F%  aTՊ}s+"tVB6R[(ȸsJ!P0'y]TMKdȩM7sbܑɇ|mu/}c)@UagѢ+k`H)nՇ-D C,'9TE,i(rVhJgMV9mч$nհR{PneR3T/fjE$4-5V)U܍w*s>G1߲o>(ɔ3?([VjO7?[c| ->hMW$->q6J^7iꓬDnQk{VAܶzE[)-?y7Kgnm1:mQǻ1fQvJnݺ!_xmܧ|=Rk8j$5It-|N2VRsZx6+/v3%؇F4KG4-sTKi $ cRR$N9aF3nXT@)9>͑&,G咄Y.-ђ*R< 3Zh^"~TT]\:|;x#{ 5\rF4\. AWp[Bs ,g4" ǁ1t.T㧆a& c8EX*+[Kw'hGWLQI/ LTT\VΰNev$wZخiтgޙ4rz/ R(,:P*ԁj4K( fIWڏ-4:*x  tVBUjZ vHOޮ^y4cAf>멼-KqMn30ґr\7d3eladdHIKa$Rƅ1YcF .pgo6Z.h$H|DH )K\r,Q + 0o&xҸkf8OtaBɨî4 Ӗ׃,Kö Tű PmxbðB'n7B544g 1B(>pPy4#ʍBO2ԃ.Odd67sx' Tύ1>ZV7_Z#[-սU10Kbй6=>]d…1;~\RߌJA6.!?l2'_~F?fwc/b߯l\#[qA3PQ1쀊0᝶l+Gl@tVt6wBGY駭]lY4y0.-myѥqrCZXNc965ח\o7k3|W]dyeVzɥdc㋗/JXˣmT?/*FظD29x_W%v.u&`0"lrˏ9"r޵Lq&NUkݯ8C,>It2¯|3_x<#-eU_&[CI*gEZ&xVA$vr]wһ(v>>6bHG®,\.g\ LGGAD ~ ,( 0*Q} ;'p B$įc\F:"ž֓ȻkO En l<VOzHkH_`~BBH̬ u<%yHKJu˸\HMVXJ*cQev&{AswWdmqcO[E \}1F08{%hN۱<ĭDt'1yL8$ooRv^ZHoUqCx4q3 15vJ5ŖWRWA[Qi:(Eb qޥyc,⺥PEa%}RuFIz(exO:Bz"ڮpaAOTpJ˛wÓxHi?POzQE87xJː{sRvy yd< |o@ݺɪ- A@Wyԫw2R- "],0@! %6* _& aګ HUSM?MtRjo,fjØ!K]@-!=Sɝ j>®BBľ,~B9kB4K,ED(|s"/-6ӚVsJ$mCpJpl=M1i\Y^:Lg0i•6.ʙe$kC3޹t-pÉ%K{1T@E\H^\Z#dFjVB* lFt{ĬZY HצL/8֭k8UZ\D&TeA0fs F ~|k5rEbR!3;ccފصuC.ڦON[[ BNgng%yJہޭw} cm.d9=wf>=qK!kg]- Ηb_5Z>iL"fDm=ωg@I0]vr^a"u^L|Y L&杅{Aç`hVvs *#݆V@J+B$t&BZsQ*IJ1;(5M0Ft|HB:>tPi(E|qYMWաEWǮs8[m:v*fzY|e/sdltf2)4H,_B9ǥ$gJә[?x_@Lѩ%c~>I2<,3Ox}JH dWKrs]&߅/E1S'J2g>'CM+*e|ïg!5aDuU y)3ufd{>1n =v77b1m]5ou'GdA iB,dĴe])a˧5gmyjJ׀RT=`Īq # ZqU 4'1߀U}F͸F灓rY7ucQ):WT Jڭbp5p_WN +h~ dK+N= ! 
}3GįSrJK".i)Y&2AJ72©m"vlj_3>H F*qi#Am._^,ygZxIЋLsR"]><BV {q= F[ߧW_N`SB{ʿէؐJ9:QJ 6JkDypDޫc?\te;ɽ7jT8˧׉l  %L'BRHL!]20GW=ՀrU*YD-;xS:ڵlLAvrj[ߎwgʚH_a&+.Yc֡<ߝ`Fw0PH<_Vy<]=>O«YkM BI;Z ajב)A6_wC86eG0f5ܘf `%j۠[(gyhNu& Aw%-'hqSQYɨ6Vf(LP.ajSdܠf4+&X/!ZH"rŽ+_I3@#$q.av1W&p;=]nLo#Dib"C͢<(IyH}Eg\rg )'e\j#-8 Q,Mk?vۧ DìdG_}gIx ٔ*Ok^ |.*O)"P߁䔵w͎Ɠ9#~|PM^aOF| NBO_#7Kxa\JQ~8AUgAܲqX Lm7'J4'7XJ97EA(]Šp@Iz"} dU#B9cv.k援w6˚BkG}@Ёu?X[ԾV{7 Xv8`FqUӤݴ\Kt32r+ 'yM`Fdkw\;ӳ|ҊTe.)қR$*\eDFd#i lr_9}GZ-|I+T- ?\T3 8=NT( xR7e.~ 0}wl)+=YAEor]ؘyvBVw^'az= P^xRٌ¦f/Ҍ>VJ爴4z'2VhXiF'Eeb@{,b6 t `X:iKK Y7)bK$eiZxVď/'{do-pr-b0eMϴ^uڃm=b&!Go۰ΊCpcP17 une櫏dbN9JCӮ7lܺUfWG MΨc >eY_#Me ]{X R1Эcx[ #c/~sz}OFnﲪ|Isi?֜6iT BxB)Q{ @!,̾)@K=Ĺ/!\Pu^V.={5AL \H:pݛO[Ր+6RRF-qJ1Ir$H s߻H$3*Jܮ'fJE}S9㗸m%.+j&n֖"3zqJJKǻY+)}{;%NO>I5Aq O .N2$JrgNݎ`<}fo ެţ餳C +;5Q#X}TsJ!kMRܾ帑/C:;S-@b[}Ciͼ^NEN<\: RiSIvIӚmG 1+ ˷&=Ef8i ь8"GRy\ Qюګ5x-?=r S?ܭŝ4 n/Mɝx: Op<#r ?'{+N>I{qm]ʖSHzi}KnNiʺ5w83uT*PR8vC$iV2h*+#Vr&T{"c[;#V f1O7baxlhPSZK@ꯉR-'AYb]S&v^ЂS.ӵsGrL2sL6kQս+Ww <߀}wݶ\%m pݻAl$MÛFl،"VZ rc-C<{ݭ-z.Vx77 Ϩ}׫ }0$rtmlkď4<.S,㝷8CRW@8J$+̅N-`FOq<|=yϓgɼ- dnUy93Ynmr!xPwA|/~FW;d\)!'u6umod cd ^A &r5! 3F $c o 8*K^ >y ơPĈ((e9:'\e |F.ƶ|ОrFx!qF;X)@GfY&ɒQ(tSKZ$u HΓ^$J&b](RZ<šxb)ah#[gvic[: \wT{RV WbxNӆCJVB`R PTI}BU Kytat3#w426Z- ni49k8nZT n& 747&OVqΨ΅bN45h3Jbˉ F6YA_K,7O8_"ƈ Xf[1lRZ 3I>&7&>fh]@43M#6\&PEaQk3MQ[F-=vkWR(mHAT_'Buiu*"NEE5_J-[ԣ ".3+R^_-Y+w:|WwxlfA$},6 |Vʠ@g2а,-RGE Vs v=@k3cҲ+Y+i+?(!#UzS;<{WSu#shs*DNTlZQs@ԔG>itiGK^Z UӸL5O'QMh]*R5+,Mdzě4EnE>^|qcx?^9g{< x'9jȿ%3Er, =fqf@v21<=OxPk{DtD>kO$ԲiS ]+^w(F~~< }9sXR>H}zvÁoGh6MRdJT^KI"Њ Q8BI{9k. %8ݤ)c8A=-i Qgj^F7Ă1g=.QKJgFɩ{ {+';d U.Uͦ/?foo[Ypg}4Ӛ?m$YsLM2l2d`nvGڑ-E3H%Q)^Df,"ꮮ[Wr=T q$?܇b+̍R'd Xk{?Xv*m0"@ $f.p =ƫ 8T4RRr3#MaiEVP#1Lk5!Π|o8x.L_VN!vFR"XaꐸGF=SDZa0TB PD||U٣2+ChJ)ǵEP*e!m @ w;:DR:܀T1L*/#qulܩ a#r"U"Xݣ~Dhl}%A kGnjJqC\9wqS!RT24PN$9ϯ(^QA=낯V u v8aP `Z3nԅ{L`+vpͰu ЄpB,qC8GE_V#]t=KτV0m%K)[}>$Q: dGLq kDqIQPn,H0cyCou!:.0PHe#|&R>xDS()g%r$%6XH($ !Xoڰ1m9э0APg*IHQV;2`LK->%֖dr Ve* TǕ DPӸse_ ?߄\zY缶ls}y.hè p/| DWw17iT$ӹ;`t,&2^coo{a콋NL+9׿& g Ɨu.0ۋ'ʟ~~ɳlI/|v\ _gI~\tgga==VP=q=zB^EOE n6N*\G(c`R^g s®UE'^_|o7n:wΚ"Wznr>|}OH};"Q'vc|}ѿO߽ObŷszrkOǯٿit[Wϗ>LYrk\\nt4/g!^exz8^~0ԥ`0sRؿOMûI0)I"83FO7qڷ)f8@}~fdkNd4ޥ8i3N?_NI2{z$>}@f^Y5m/\;~߸;H ,`V@E. Ѓ=z7(H s.TqR ?AOrEq '*b>yC4;ޝ.kѻtwkʻ"|,j:އ#Bjk3 T$ :ȧq֡bS[))m3[3YXUG"ӚdOx+_ՕBWbYy2KݻZyMȂ}>Gk^74`I\ShiM0~,SH;SH)exD(䀹M]jx:\ !Hy.L,89(1@'vNw(T0-SNbun24]Fhҫ^!*S 27yzZخ]ߙN@ bX> &%Ȟ %"-A3} 7.?yW<8nzlʣ2ø`Wpq~3h+C6rg'c~GI,RC2eɗ3*y6̇X3{i졗odsۘ^iύʹupf6:ڍ̿u^擱sTj(ejNwbQjt[ڭXk4(B: LM\ZM6=04O_}Z{s4b19GQpv֢ N˾&հ~ڷDUƤIcCT8 %v~PJJk'?o;SבcΙ1"V^ŷ *Z}TuB oxG!W1T*H]Vb .xAV_XxqV{B@o}uiPHAsr@:-M&5"s uZi/Wnh-CߛCnd-^~q׸5L| e̜!t-GZ.6X_o3p/u~K3.Usa[uFB+R#%DUgqbɚ=תGQpve S 3}/Qc&4V]jIsio;|T)|@u\cZRLy;Rn-̽m.Ńśf,PW0pd_e0 qڻ|!x$ҪٛJ]J40+tlƏ#rּ\9kA@1]:]s]VA)[ 2D$-6$ Ҵ"U,32d(Мa-v-5}6e; k.r&l`A#5n?t4Y!o+U\hrk:P[aEGw~ّZX/4nLv(q-;[=/:Kh?AIZCqk<|@Fi reMc1EY#nuIwlPbҲ#^] nhg'֞ҪIp(^zspC/H۝gwVAETl9,ð{W`vQ4b2 »3u)uT];# D`t(uw)FAj-5&gg)2?y L壻Ax'|~$dh`AEvߟnB a*)KSy}q˚\ Q!c$ja$B̭th1PT)2ʡ1 Rℌ禃}9~<OLĜ2wgaU™kf(b#%wܟ_9"N)7#̀Ȉ}+~E_ߣg!swp%|BoDY"-R :aEv:]hS<.(Srj6uq*U9)1eKz>V6ϭY(FڀH!e6/3j|*#}Cg;jfc)xgʏ4?xEB΂I K"vw!-}رEzcRaQ/Bܵ<PEx-/@$?wʝşnx\i1ZuvpNd:S=rPMЗ,nL<"NDzX1%C3^vxFYy#8 .HDܹB+vI(JMdv^,80ō9x.5V|g;Gz R6fy@B;"CGuŗ5WXeQN_qq@N6AEQ%ׄ#ʨΠbycӏnC{4Н;y>-F*!ldQНYF NMSLwtkzmל#F?A=%(G;~}y~%K Yﮣea͑4õV.w}{{qa8ao{ڋ|r{n dϸ׽*>9^=c5{Ps9tCL e!kodwd<;TZ$_&omg _hj]). 
w8Rh NvLq=.Ωlg2'+8]gb&(w/.N1dM72׷iraڳ\,=+j-%)Рn PDyJFf5a28 L'+>?Nzo?Ź~ӎNzvz3qਜ7|Q"y;igozK~bƝo- {xjvmf3.6WV1T퓗sc{]viȹ킧8ZL}mB}= v-L1x4DtY+*x-֛qek.7m~)E5ZwE;KC@҂5U64;Ń]TDWDQtݦNBpT-ɄB2*j$xYTȐLEkK)^ʘ/ʹ5Zةҩ|L7Z 'k@3][2[%YI.fDr}@!Rg9B5[MA2{L4{L4(1UAL# a}S;Qⷀ3$5Qn*h!1ThX#rZt "@yiИyzX6â ^J0;_MmscEsoonp*ͦ[鞥`eM]!jR#Fˈ+rZ!1r$ ߓ=i%a19I 6p?~qU0&{V7gvGtFf帾$?#Olߡć NMAFo׷K̂^)+ _z?x2 ۛi?%_W΅L6\9V?݌F|&p mo.O)1T(g'K?Vn5R.,)7yJKB?_L߬ixi7g&K 7A\>wؚlN}.uѢRs}_+u "H*|j%eQ*I~ ˺<Z\ҤxD!1B2bȕxȃb9׎Cü9Qz%l=`(avǝz+1屪e ;j:K@JJ"ݤhm,0u@rZP&ρV9(cb&> 1Gg:9]!uQ7]{1eO`y5gkfv_5+`͌a8o׋Y/\ʽ%3LwH_6#.[.bV|fB x,<̆RęǗן@c Ik>'Kយ)5r:/&gu~(-՞T{n K5g18ERs4TB^i-r5 LWRWoWArIH|tmC<>eAhVTT#KZJZ]F{*(k[{a¦"lo(&]TLt2bJfODv59˫Y^MjrV^MIv^+ *pgRm@eb0Ü L  @Օ5Qȇ(HPN`BݿD󼞵q8(ˆWC?Y)"eKx|3/ٝ$rzfE"US>=fF %N, 4|Xl)5N\P=MiW'Z:& y$r&("RFȄϗcpRCz,םlho2h`4sBYF N(o-qVjn9 Ҹ a SpXcT6*X Q7KU'5"`h*(-m%#ޟpX(#8rlbFeEY@hʠYI1H4l]mA.a5 $]Nz_;IfGs}s']$%刣 )vzt 5RNsEjhC³ J:Idϙ?h\M]]TUܢM5lH(1戁(=2Zp{•`j7VTlJsj/Zr̀U5U gOkTsP*=\ጌV(WzUÝN䫃#9pG@7qJGMCszo@F'hJgl׀RRBG }qew*s;+ͷ )m-^$8z,~cI -b>YhEп5b K˹{ƕҁg..K:A!d* 4t5ShxJh)T{}uXt,ə`뽭5-r"/_=H LsL 5JIqb ۖd}JWufUoE1F⒴(; y՝w^ۥZ߃QT|J3IT{+q&:ˢ5)됋{'䮡Q-'X|sypeAmt@۔QE%(uY:Pv#Cs\kĪFTu%-Xtsix =Z`d%7ʙWQ D7fHށ8~NU۟|}[@Fh]:@a'QLjt[Jwz2S) Zz#)|*$neyU26Y)HNP PQ/B=bv򭹖ʅB?oM46Dh!mBmFtHH:JITi QmVkcDրE-xPX22O/CW 92ʼnd:M|BY%'#Eu`Fo;hH".W:GAeٔIg;-S/2g;'yk g}J$ry5t;Itoh_[TB̅sTxν1 R YQe B}`{x/`jW9{XAO2(SmJYmq5 Ҭb9Dsn.l KS@w7{ 郺J}|aD#aӠ;Rz\ Q7awz uJzѷSp5WJL(//;\+5R.9EIm蟸{4^@ {r[v\R#6.lHmM#tD{1f%' 40+Kc;y9d.z%[7H.IfLwߝLX#ldIwG?(n>g 6r`؉<'e]rn &'OI>rwQ -Uswɨn݋lyYLdF2A>(ұVB+KuyԐvŮv L({E;R@o0+YMQTiqEJ;]lc'XQ3&8'KtF .iw P∺~TX`_+6oYpnb30eUR Ÿ!R(QX/cl.*Op0jAyh`4Ol*5.d'|n$V"4ڂi1˵pn~#BHM=5/#}ΐ6m1T펧PEP3IҭkTѨm {H*1ѨkU[fmk0wډTCIbaW %E|=^DݲLK `^ F&ޅ3j|]5 Jg>Xvo 34n!|Sm"ƭrkDڼ \ݝ-Ɉ GBZebbQ9KNKB=*Ovϵ ]Oxp?NLf}#I0* 2v}VgOqz#4i䫇Z~ϣ=4tf**EfZ`Dqh&`e-y K{״+Āt ľ5UPBXYUPˌ  $0HJ!b\Y MS_[xҦ\ KS0"ÿ́ b6j|7B5/U & /N5RuSYP~Šj6qpTSY+w7=o+4R!]M[A\i N:*tD=V!B{awiN}łr 6г4v= *GoE"\-krà|i5Sg[nP9~5q/Gխu..Wm݌^e;njLtJ]=]2*W$^עXPJBOv,He7r]'fwT#ÉJ֮Qɑ 6 ^.3)3>aV#1bW C)S]d>b'J2Oߓr3qRGG;.2b2℘6/d],wk6T7t<;#T}֫&TBDK߮*[XT.7{,7/bjZ2`zj?8hKygń2n>Ik|3*5 xoowQT_ytdqŵQzgǻk>.pcGMiIfC{M]<)O+Wj,֒k^K |ǂr77O`2 גr끑ٴո+Z/b]ּ5؛#y-BN/gSZNK\T>$!o\Ddt~&TKI==CRzPYhgs(IUg{Fx){=#q )/#vD_E脾v됧IԔδ[x OCB޸ɔXnF=1v;[$n [ y""SJȞ6YdXpk{cKj 4ANO+oa4[3Q>-mJL'Uq0x=ڥN(ЉT3B`d:-Fdh 43tARFTn3#.q6%LjEʐk"3^ LCĚA EX/v6&d,SɄdN,/ aJ";dQ(U8ӯ :wC:?u_矝X>sǻO~8w3*x|ɓgTR-6 Js*;BΝDqW;OK;+Х\^^6/ߗ4M65h5o 'Иw/+C_2Ƃ{UW/ڒ[-kN˚^՜z{MT)j&Kj&K;~2b5Yuݡ:R{g R/ǫwe7JU-m ~$eiktgD풅;>I-vyt9TVO%E:NeYGq1"9jUj;[Lǃ'Y:t\MA yP iiS.q^W %Dgݞ3fu 8Rn gsTns1#a,Jce%™I3)(f2*[\Yn*w^ZFHg$Ss1,L(KSL)Fdb"Bd@K饪WuFZ-QnI .(ȑ`(9-"UϦFLj4-8 t(ަ2Vr.# Yi8BYFx ƄDLT]դdo\/$`D (8d1ek9S-:D*ǃ{?@Ďiu>[9vZx;Xd"}A ȴ8eE4Gzivek|%UZ`\uIvf"`62M_g\qpQTIxwos6J>M#JNc;o`HO^eyՋͧ0OƶM9]5qmjo9+..0(D`&grMyeZdY]Hd*P)l Ђݼs[j9 ɡv(dD+O02g*آڇc=ԩ*كR?t8}<J2Q<5Cx_~Y0p <`a^9sQ`@ m]1>lg(2&KcI_X"L&LYT QR/ds}р 9P@uuTJJY*د 1,!P],XA-6(S l g6y _p(K퉖kWP>H<D-' kqb*-+r!ef ܂)CWBR+鸫I;q&Zσb|Hu~ls:M$yφe#4,$Df`I, Tc2$+xlk2fb{mMv|ed^zx5g]/|+ow`ïKDc@}6iOlqud=—s6Wx}”K[70I) Yw ^1O˅lӾPW:]Q{+O^9cwa:~Lf80c8lw8 @R;%?^^`(7=qG[`{SJs V!J~51u"WdZtY-6KTwJ_9wSۃ E+ y>gɃ|y=RsM~=xX+B`]ٹ`C+v5ȮRG5?9 g}Q47.1ᒰ@Pn~1;t>\lpu=P>YiUPW|gj瞩;):rpk͜1tuXsI殤LOU)6*KvahSgZ#Dј,BeҔKpJsLNY.)A,ż죯(IZq{λVDs^ Ѕ2/zZ6[^{m`>=/3J pRkJ# wB _n̈P.QFK `rcHfQ9RHLf^7VE{F9nrv˾fDיd+?hS:&w76ߞt[ڝGu@p:+::3 z-Qhz Ud/N@hQYOda~t{ |[@)2pb "ɌPI)n%pf'|)3)]w&+O۩WgO'ۥEmKs>9ia–$4aHe l"I14Kjޱ*RbkV~lAROI<"/?~lx-(I#WEƴr; ĸ ܏(T;~UȨ4ܳwe_ק01ܥJ+8}iVŊ٫ o!H+%VDNFL hDvCL UH{8B^g.ip Zt]\GgɆһ-uYkd*)纮IYrM%/zn}q)i&Uӄ C*DŽgD3A\307,Zb5u.n=-9<+aQR[m`nWzXu=YR Tpi&A,FsD4g.yGEj!豘7Ud2 8`T׫! 
]ڠ.@Y\f 1t5!DFа;*LC-lvCdw )"mm=g!YA)B)z .J>F+_h\kB]71jVakkF)>mEW3ZR uOuڧanY=CFh\_"ÍǯD h[OA[CIU=U)0*3J$0{%ޖ9Tak~OӀQGa"rw/;sB͜X`RllՓ<}h!3=J.no{kV+aQQ9 'fƔ%FVBS\ @AR u @s;?\~ 5lhbbz,3cT VI˯_1F$^)ư⑄0'p9Lj $ ޴^'@B *S%`9'=XP}IxITV"Y 5sL0Y:uwx%p!a9uw/Łs~#%zb ZOZSkPK Mxj\ީX=(Je)ڹ<"LPKl爴j*"=DP"nY~@t2L x;$|?GHGsL!T2]7: BqV JEz\(r(\ýh-Cz-SW+g3k ӬEȽ@r=O9MB%KSu&ڂqc0oam`3|3|w6X$p 1IgY1|*˺D^"_+'?zdo܏G\>շQ~|plZf~y1$zJ<sٔ apqS/\HT ,^i'!ivh"i.{v:zh8D;A,"6y![3Cl ;ȏo!v} - Nį֬r׶U:joXO0DX$Ljz<4;ZD-GNiuhOQ \C$,ȟn딪W]8:7o?}'Zݎ~װkKzѲf ŻŁ@/\ulS3ǓW8~-1 (rsL%g_)3--濉 (k1d5V(b1\+6jcjwFکEե,IbV8U [;DP,QpX.:ݻЮFH Z鞪pA {$JewjI\#(QM^ыc1-z8mjޱ8bxqS<\߉?O֬kc@7<A0>1DK(zMnȲ׈Pg T#l1w [=)դ'JMDMX*ǻ{qfܛ bo.j֭^ά5Nx p%$'#GA5h(Ͻ=hʹִy9ߝ8?IqyaM?1fO6VBX x*/AH΍b;8p@;?<|ܹ$ӯ maӖŸMnM"p+$, Dh)$ +ܵ@Q:$I%щRAxT-ϓ1 0fIb"U+~*rFS+HZ%AsےN#9Qjر;ăduTVZFX]侮ޑ C2 `5o['[ b:҉rVA U:Kk@W<9Zd K2SUpfd7j> t߷ .(Y Vu81b֘_2[ㆧd\4 /.3Nf:ӚL"# };֐<#}F"5lE;"f+,s ǫG<9MQ41,Ku|2VrtĬF"W(z|3ѽ4?E%%PU`Wytݳ5#Ͼ3+_#RB:BuyxYW~]ׅ]q`pA;S95ۦG5/Sp`AM6ĚH2g?`H.!yu"auQpv[w/A[):99U~rxIE[=ޏOWG` 6F{śZȻy/`J `(BN,=ݻ&PjIi$$Z<7м0ҙaB%*]/Ӹ]II𖼺)$^zsI">*`l#DDQ!I!cKn$R~e锣V P.-v8"%IlτW8Io6Y`mJ-%Ӱ3, k9)rɜ:`g}"%V $08wOwcmL1^ZEc¹.:m P Ytl ˅ՌQbr aZP\%#ђTB [SJ}k8fr6/9V߶hB Z x%DK]RI/ -&2Gyn` TD39؊H+9.i;4'T:00@G_U|gd_iYT2ڬR $O=@'g@f:~+auMݘ&j8>SɜI$y񇝚_̔0_:aQr ]M3Y"twt *-^Fe"`(P0 T0pA45.e PGc2ZQ;wyT\ȏ:GV6?X;N4I4}db8߯O5"rK4#w9Z҈0^1""nu_!N*"C<+KĨ㜳82gZ{7_bR^,&Nlr'&6M6B<"ɓd﷚zQel<#>U]]ݬ)tH}:bTNXƌt+\=|X˼lKLlT +Pb$GNk25k(N)q[iT18G ڮp}Q͏"<4]EߙM.5]I2ݩ#5mj#"^Sihl$TY$O-r#Zա[>ܭ. 3aul|r3j?#A{ps#iF096]NGxY1S[W g)#mSYXYF%ijI; +l]BTeZB8#he&)y3 fx>,.$ WoA>r=nJ)_Ͼ寳7WuLEv+2fx5bLQnƿ 3a~uA HIEgr1DŽE(oчb*ݙty` )?cD ^~`E 6a9S08F)3mn-RZ#B42qP5]5\*㨓 GOUX<խ5M\:Uop)'"wؾs2$Қt憈и5;M;Dh%aQo?U䥻kX;Hlo/`p&:_[,{ c[~m7YڪiKik:εtڢ X/YWwb1bIQ&j1Yv2t6!ICE[*sJш"K6 j:%ݓj8FOQ UX8T_g0y0. Suz8JG,'WvGɝ!99?Ü^}s*C*԰RyAiw-L( ٶlxT1w燫"Ev 0)ŸOL\z!RSz 0g?CO&$e(&b@A(_$4F6T)?dh e& f v2hb 3 TRP|G)dII'ș$2Q)&1k<-[LS>+)hK,' Y浅e&rq+ XVesX=X^F/ =Y2I[_}e{I9OJ8c :l/lBGX`^ $6Ƒ[tS0K"۪-" QW bQ,U͒Rss`;ċ, };{cxýw·>-[S&79d< gV!SSV2.У"qP(R>P.h:){Tݱ(ޕK=OUKD]/$9 }u|Ma߆i`Zn1q4 ~xmS*I&uu/)SZbʡE/qE?+!҉D`N8G1w; f%V:cn΍S't.$:D)Ri7z]Ie9X6Gjea՝A8Fԗi n|K0#]O(A)-DE\@RQ ?v,E#"L@ЉM$JQ6-(ahi&d}ㄛ7_SዏF}Gs)%I[|rDu\uo2yնs2OȌRMn[>/ OYYUҥo[cٻ_7N14bJH FFH$EIoe7#]d l8breȁ{ c#ߘE8;#rRo}_cG]KR>FLK]N P~JlQRl1wX#ٳZc ĩfư #JQpV[ Ί$)S]%$ok ]6+B`>Cy[N_jΩIT~[P_uA3|]~1Ȗ,Y#2Q#{z%늾 g;HDAi ƃ>D睫l&hwAi NN^k'"t?)ΪεX(E׊Ԍ[=loL W`KGQC2jo_ N/~%8u@#MTgm:5=(mEu^q.瓇tP0?! ,k5gIȦTsKRKM(TXu۷?t' !Hh-PB~BRz?8=@jF r0As^s$ UP8W"źKb־udr}(bjj Wt6xEHܳjn,X{%Y?h擢I;f#L}X͆1td>qBB_MJ.Q;R.VmY{08L^ZF5]_R$4Q"DzsRk)fL3}fa,7圻m=[,.x8ܿHj՗޳xpݮ_TKyTIkGWB<4~.WSRqm'Me4_38Fkɿ'Wм! 
y&dSu}rG^B1(c:Ϩ:%\E-LIքp )BԻ:ߣw Š>wμ[xޭ y&dSrIGA }F 8“Jz&,䅛hm"I l/H[L?-fw𕿯7Un_TASIl;1ȤBGMљg<.YU%+) PSJ ثF=|)˳' nQ)ԫ3>]_%4:y\{u:eSG) ;p*H_5ɻt&=z*[BĿm+nR~$h lݖxlVߔ_:]GP2mԯQEc/i!Z_J,Auq1F7 kPA1p5k$)i=MhFa_fJZ^| &VϢRc2Lbiv6Bĉ~} َ K#%@!IT% >ʤnWnr2hL1fR1F'YڙY$󰺝>}u%Ss7ZM.Ԙ`ɇ)#!/>F0+j^&ijT,ל"3 L +#HX!PlUljMo־niD7w$hf?ɼp!*}b,&wS;4ܢ.)]{[0>A4m3tfdcq \%dYeU.I&jm .4MIaBZRZ YLrȂI U͝ՑD%Y$]&&x@NhaajdǨ7jB`4xn`"kjepѺsMלcѢDo[%HZ7K یǏq֡(b2ׯ&3fwmv\i]|K-o$~V?~_1MUs\I5۟篴okz-n?K{rk([,i2垶{1`FDo]V4:Hs#LٙsR 1z+-O\'~v{C:ށϫC=YuSq(Mo/(rp?~d-o??22fB'C#-?\[wY3+?xwDf|; ;?S\XIۣӯ'@Y{ƷZpxZ Chdg jk:a'O"uJ0eC5cӉz%lnXӼҺcנ^MKz*Yz6'uׄ_:ߵ_ۊXaeo.º7B1ڻoreP}A`4!(0^UXuhW+ӾJ˃jNK^V]_1>IKV̱VA•prxB۾8RoAGF-:(9a@[>(=E8,9zބGV &2[zr a l{y;v,YSޚIqlTC2!zD\RQ$eHn%ZLĄ8[DNސ'M[L-Q%4*`erR`%)=g2\W$s"!ںۨq;ĺZĞrZje+ϼUH.pz(+g86(pDb#}ʪɟd\2,b|;3EB5噊|dP ņ嫡N3-$¿cCYCϣ$mΎ&ՈטVjh6rtȱl?_Z?T#tNf?vS+{%3޹'Y{2i6'o3v,ɾw<'1;1ښ|fϧMWu:~dc6Hw`ɊX sy\"H~KU&jQIceEz$UNiHkd,<1sִ(gPWϟz 790x jbG4 hq !}Z8/O/d!/&UXF$5Qi$!਻,.͹OEHXɦj%aMvlrNo &kRsQxӶɹvݵzykd^$/ /ə2i#!(&}N'RmZxG #&$o$)ӌL [IyYa%W$7/_l~J??0%9 =%pΩ7͉%cUy϶go]JP v^UEPRbZ"&4iY%x$^HwwT\|\b*ӋTzGc8d6 Y…} cȻf7v+BSJ7'q!ToO# yбӚ˹jM|cu꜐&L:WWAk^G2T+Sd C2+3f[rHY1Cl1Ӹkў0r[0d6F=J :X¯;ƬL-P> j-C {bݪPU9?ȹx߻'Zץ ʑqZ#D*୫xKNU\䄃`O+tFNPY1`+4l3и'x U= W'`4.ivXL׷MPsJaL&`bJ})ǚ[I&vyUi`jFѦWR hnzGBڀbpu^d2ۺyr0'q8k|n=BjNo9تق`W]t3XΣn c<.VeVv blQ24LT謳>UF[Ip'sI#v .}Pw)ze% l՛pqlZ~E;X.XZ\YXD-Nf/q4+öjrw+Oo#OHoRLmٛS-~ۥ͑*b]acz{ 2˙i`r0'[ cҾ l;R8s)J>y#ϫ{rr95V7[ 3YT}<'S.9 \;+֓0p0_-eu5@*+QUI+_[%V^h!@%3-~15&EU=Y"nP)wPLY|)O*G\}'O]/e~=_(] p]x}7A& %mRwђC\YVp2B10`7;]jcM#g/gq(LOS}D%~Gxu;O:d}}'ͤ/}|ߪۘ%?:岎gY.x,8ߜ_WAhС)DJEЀ6ɂ-FC5ю~q y2zsCrίէQۉNFox$AHߑ 8$/9GccҜB6:'Xh ײE1 4vUY (l^>|ky#%>c,gl匍qh.~&4BZ:$k f3 @s (yDF/Zm'XrnWʩ@zNlʈ(P2{i!Q wGjLcTz>J۳V/i`ϥQ~2Z4\bO4zD䲗׽D3I}dDūO}(o{Uqz(H|%䨭?`Y5`)yJBq%XKitԉ'6jD\dLg8ںVXҶd/$PKdLl}Uss&B덒Koą!A+,71pSx%X`%`u+[,1i2Yc&ȃ6KZcVJڧe@=b+Ęò\Wɴ> :E`t y!'"ْߤч<,ɤ=n ~'6H_p.q:%0dw7q痑&rz8U. _!G(ߌogFwD}ʪɟO;n|swgꊞ.m,{Wk{ knGAg(-Sd m.B\I`޲ &V(ƶqзqy>փşcx_v~ hPB1D2M:*!eϗSFF\O/ 9Aӣ5NO9/7KP>uT 9-:I~atXL+S1J#b:o]x - cÿ9;rk៛#'5_ %Cnң᯳^ *ػFW}IRWu^۟- Iz7!Eȡ8ÙH[Bș.T?}f2}…Wŗ{OO/v4*׷#%!lj3_ZI΍Osoϫٓ$k]͜B;0E1 !E53Ji4C0_)mVkQo'6f:ZՕ2Q$rv&3)P:D&K@Cu!rd4H7$o> 3y A|憂 xPJ!OJǜ$Ioqxhšn mdp!t3e %mڮ5*ٍi+՟w?]\M2\<0܎p}~сWqOv4bxõ4HbJu8ܡ[ьC$T䲏Z^vYSO:FrJpց\ 0XfHZ Ǭ팵撦w 7/BN L񖱼(]ry\:MD *G2)< ڰ H9G @#E1=8<b_Atxt @3"YmU!K4CC";͘ݯ"j%ЇL"ZaO2:atNVT;/Bw.h pΉ «$S\M}cŠ&tKeMّR(l@WtLYX&gz[GքYO[.~|?!)3qzт=ʞ-}ʞ53lGz!HO9AI=NPrcm6D8~bo38ݙ;EI_ }gԌY;C3,]|+7#6HAOHBitHe*+,Tr@+VoK6Žf1Zt:.w-w:tLt:n+sٝS0$8,ƣQ8%, 38NyU&1A N߭(M:,x2jtgM0K8*@IZǢHtֵLs$z ǓʄwaQAXX 1* 7dߦ}:TkP 4kďl筦\uZG+u#94Kw\& pMQ'Q=$\@9=%A.gZ0о2:BT*q)sҒZIFK ut#VVsFz&bND!AH2sPgxЪD6XH䪯ۻzkÉ;΃TA۾^J*})$e@aǮp/#ö.24]J"s]%"ow hGЭTSi)ڗ-giCII]z/PE++(&&gqBPsh3e«dA쐌jPT0[T#, }pJR8$Y] #PB4a.MJܤDgA@PP9T 9eV mm吊jN 5%]6 e IEk!Cj4hI!FB&"eN,*õ\/55h7%UcLkAeIz-їgqH!RCs8q2S%} & |:L pPzW5jIYP4V8RǤsV2*>XQ>-#X9't!;U}XMs]-G,:U3M.{P-r^Hn VYnEtbz>D73;g0O]ֻ? Uz7ȋ>cAy~q ?Z(77z3N'Ǻ. q哧&ctBiVHocWgVfDƤ4^_ I5 /:8B(A]rڭ]zN; t'k]\7dAK4I~7q/N鋾0{y?~t~|sBznXi&OOlfKRVNfm\vtm<3DSOyz̛0f/<2tg%mڗY6x^rϩݛE_8RV+~^ФS8(5iBY;gePo[SV55eʞb; '}md}I/?|gǨa +Y哽cs{IA~*A@Ӓ{Swnݱa_߼}l60KIJGNAz+ޔJoD[Lu6 a%yCH6+XɈ)Vƣy;P[‘ސ%z!W! 
.g mLrQ|ޔrA86/\u^xYC@&EġщJ{ǽYZ-̟:q)0.~#΁ x@WI:"HEU K_=Aj:Þ3ڛډ P4zd,z>N/s;Fui::UG,@W{ j#kjg<9' δ ۱'1 e{r(he1#ͽ9y  KDI*ykttT*4yNŁ"ZwJ:jb/ڮ>(T8P @'Z+Gg`PFbBP빌ғ@jTKC):>V:]LxdGP5P#HJRwr .xxҢ`9f 9ыϽ~Dd(E2<")A'CKI^jUtiW|oۼd:2qR;e2wFI< fȣC}Ư `+ie9ꒆv(0n0h+ %.5`WnNÕ}\>_y)}k`k./h  Q":NO\zBv'yI@(1^^kWRFoRqM33Z;"XGqUFj$08x|M*N&ޥW8/lQa۷k%w4!;za|s5Y_VsLRqrڐ01zN.b࿍f;;0>1]L*-;!ZdmrHY}0í~PE+B L䆪hTO,IS‚%-Mb_,^]no9jGt, <:=^f)z ܈/0/S-HXks*T:y&p B)[57|띝H4;.6&W~k #*gXD%w4~E%=9\Q׃u[ef֪5rAE!ehct0^+m %<^dwBKA^; ڀ>.]mBZ]<7CK:ºWk)Oè#wb/EFZr@ o#Dc="ɞwb& iM FKd%< ˣ8ߦVjrnt5(rJj(3.4׌JtJ1z朴$zVRkml{2vvE^y`=i7$5O-Z*/%wf[T|F0Nkp(`16EB7 %ksor@bh.<H)(z2l&sƏͦl30\ ')Q 7AvwъIZ.nP"Ҝ+mP Imo^K'1'12Yڧ]y~ňpLͨ;Ghn݁+G3s3 "ȩr]H Y %q鴑8EYwQzk~v/2GbReWkI'\COVj$YNR3֥b`dYW#6-=q=>_:q$tCQ;KP@e]) -F`X #,;'q#bHvLNdn TtS,MytOn,"E^Y3/z3 ;15+S"or )AiaddhDAއweϒEyfxS4VMxI cU>>'Xu)}"UDxeCbaHjQs* i_zn+鏔9]N=O㋞g\;RU\Џ*l EKG$yqW\¹<יZ_Yn={>mJ"%q_Qq@&r`&Io=óȢjmU  HiG2pAoQzsbė4ݎ? r5NNNjxEܡG+qyė=\V9l{q eC c~QZM*M=f]Mz j`{w=Zi8K~W?5ꧦ_fuPaP{[uR$$1W+;&d'tl,kr*Hu (!C2QB(Pª^#h}{guh)~X切>m/(o-Zw6߷ &n>-O!I&Ơamhrc47n~&M1sƧgϴ6OcǡyhNzgzK;DO?(x6Wn#)H10N%[&QM'Imq6Ttv'nfm7mS.lFeVsO1M]~ K\؍i% ~YH@-.sE#!>r(\ vR&b tӛG:Z}txhϯAEKrxfH'Jxu;9yMd^={=nXD*";p@dϑS\H*˅s:md"F0Ug]N%/i[1O`yQO؜mÍb|08(1y۷Hn y#B)Jaci3F{8R^ )ƣ]/*y.k (jO9Z!,;%1?w؀C`} \D0`v@җR%IG_nq[7Y}ڬDŽb*3򋆇h~GCf#!X~֌EIp~Gx$rJJ/ugwӃ xR ( 6, 1FiY.i2RHZ`,x>(i`J ypI1!iՓ=. nDR# M1e=Db1땩)Ii@{M lO-[mWkc?w'sjsʼN¸4oacεqG`%t8N=ȁw+{ cҕ>DhE@H9=F+bl\ھ5F#m(LFN]: - (M;52|Ťg0*Ah7B fOeٍu٦rԶ?_E%"{ʙl F[@-{t㭫7'0Y _R0If} ?@.$-؟$4+IJ!|UrqWRNqOx"El,Gr3EȮOScTLj78*ȍ!+6y*Q1'R*#ڇJ%`!7LB|ѥF UDxe)1RhMݚX omB_l>o޾`yBԓ7 %(\iiXpuYoۀUk}*aud.Wˆ.E}ۇ+=X]}п?vnJ \3_bNJyp$xס3qѷ^ze֟7oC7?6uxW&7O¤ yGz_0ztr7_q_[fԍ|=Ua.=AUVISyXO t,舧㽍׹#ldjt׹F|ff0DsR14_k-JrPCf Z ȟx&X zvhha444F @6 J}#7͞Tjxi4q%/]`Ԍ>/1!;#[y4XOS3ܯC4YEE{% -؄P$m[g>E;޽d%gqkU¦i =(r(?bJD0)D3 ޗ QFc%{yH]xbH۫gwq3xf ȥ; p ]V LNeX'$S%uQHn*u )0u<9dz ϩ&1IɳKmL 9ßubRpF]b`RLY=ie? 5~ɔ٥VJ;f5Em*EHD5)u93ƱZJfN# _Zg9:HBMݞڝ^,1_$KQK»S#@KE 3)mR׌Zi0#/LmQD*r|L!"mǘHS3J2[-ݾjVk~ܘw~*ǿuQ֛rzQ)DԭmFzm>֑/_OάZkcö&\y o7mJ2IAL1qvI7;Yqd1a'`:M|A=(ؖkm,7{l/ lX,6a0xuQKv2 Kl5ToW=9Q 4QT4<hSlFЅqH8.D)%ZXp`{m(Α턜t„xw&# 9s%eH/ͬGy0ig걡ef58*/EZp%bJ$!'M:qt)&^ C\NҹeӎK{{Zkl෭PඦoƔ e&Koo~lH4B͒\.ĢM}s1Xa9E.%)J?[tDxVYTŸQD@͙knj"nzx-GĴ IF?[d{1"6v`xltYo:c4z`l`'lp\W| _{~.("}$)>)iv[pg|vR:'=R8E}zrTl;Iz?/.:MlS#yx? ?ϿAKoC;g;:x+ތQw"y `:g0Srx;rjMed?C+_|joq^gh^>61gn/ņ ]G6tЪ`Ma\bOqw1Svtn..nV.# 7a>}+|w%}sG? Ȅd0!<9r|p#${|r~PMdrp71' 3)[7BJJ5Ds,p;icޞXx+wżu$ućxbw:igSm-\jL>l/y&|6O3byxv\NOLAmK>? 
|1=N5XmUkrixˋ1:l~L|!\})Ř-U.+Ktd!DlG @aDVAQƋN9̻U4׻ua!DlJɛލInNm%mĵnn]X7N~=)xZ rL6I)x0V=\օ|p=ڦ$$uS+Sŷߏ*\&5%"enˌ3e1sF>挜2g $>T% #F ą ;++mtN )PڍS"0}-V 2>K﵀meBO-[OlJbH8ۨ0//aaU-׀ XpNQ!l+q!{\*)K]޵mbkTc 53 5T{,6oU9q q4.â$$V2%BayzF=.xa6%5BNդ0[zFݴ)[+SMaX{a)Z!& %0Q`S#X< b%` p׈c#V1ȵQ o?w#Dq!FvDǻj׀9q&R:Lh~)uB` K (f&Qm+zZi_g(' GݪwA"u<bq5ĦZ`r;Z֋T)1rɼ)/U*x|[TD[9YbH)٢7hMyX]cH~ uAkLgx|yrw=MOW)R< c,6!XJ a/Lu tk ZCw5殰Prb, jPVeXhkj5}ENXٻ|3NC6aQknUO!JpZ.O$4˯g}Bp)fʯ5B1I݉L}\Q%>$A9&@LlD ov=b,fZH<8nRcBt,0 2Pag_b09"+$NҶ)tJAJpAΨБ[D<[kՂa O;CNe_$f/сxJ^sW,px'@%Ub-8T &ݺ$8c$y5h$x$Q>$0!Rɔ\U#@he>+ R;KA@ LWPA}V7V>}%Vr}u~@E):_:<_ Xhv $Wq_}$ $^w֒!kxk!`1ډ?͔^b/wНrqNs/1U*4)d`2F:c&<)^1M ކȠicK o8mDX\aX%pNԲ Wc/;ml >)HNk{wmP+Vl䁂yx—&@4=: /RR<ลQfn\/'EIXOo穣ŐC5L[9so&"yo`ġ)7s>& 9r+ Рd!lr€ dL7fzH&/#:wd\1cWYbh{D=$;y9sxqbRF~5d?jˆ Pa[dep+ Bf e+M44rZR\ 9?ZURsz2{DPq W-5 7} B@ SB`{=\4.kˆDW(BX$ өQ»TI*}TR-H{FWFtUHRPF`]Q <٭iϦJN"{nB&5]9GU\z+SIDN))*s}B'#2bC/QG x;Qo-L.,䃛hM &  tj(n#n pޭ>S?ӻua!D)L > ^K ΟE ,v4$F[B5uA4kvp n \c\1  6Dl8W+leK7۞6{aSﳕ' 1Bx8_7_z1mO{ ԄL {: T,|:~# P_5 Ȅ<"0Аr)c )x*=&8IdzGH8a 0Kyx+O Ss*8 1&2J M{P|b3!_/zp(IJ2??gpњU4z9Ͻ~l/ZXM>uhKіS[Z`ohR @1@"\ -`5Zs./=|OQY?d$8忭D9歭hX{@j$-Xʯ$B 9xߜ&fŐ"{9cfzO>mΓ \;2 ̌#lIL[w+;+=*ȒGT*Ȓ,o]wUD[381mN񍥭GVXr'zwϮ̷|zp-n|%eY;Uq'Ū5$)j%A](=GuQS `Xo,r@yQA#ðlRn|}/lCxrx׾!N^r"&mJn{xp-nELEKih}_ T.(f z叫= uMmqryXmuPwHS@2y)KM-WЯ3b÷ՃqSCp|zVW[}*d\Z}!《 ܛ (U /Yq\&Xܺ@+Z#}Вa)ɝ 1a(qA%s,%*.bq:,ՠ 1ag % 0NtPdbvmbHx_]Yo#G+^T睑p7|̠gvtvۻ(d̪*Z2ږ[,fE~yDd|6h |ٓKƖNB`ch*)A$ʃh!R&qx Rp`V$-1IJgQ * -8|qI%BX@k,$+) aU;[pCz,V `tŤ읥K+LҺ R B 0t• HU9݂UXOUѧZs XL VѹX . D7uߒ%hU[^`"H40 *%J5ʝ e:SʌҠtJH~Ti4Uԡu}ru:D9S(/%qEs^ 'V_fuLɡ/hsf&&XS J'LDk1 LE8pp'[;"6(>j"h%&[>Z$Ky#_REp;+,02sR > JR r*Mn*RŇ._ܢ}X 7DM }׻i5 JR rLm"S\A݊;nE6aA&i-BxBVA锾w;]ҩVyLևp-)'̳'nN;xݡNe-AhhV\ևpnSZp HQ5\HFk*܊pH_VK'ruG˂@ɢcR@u(|Cn[3 SEZINYyA{{du$2_kvd県:PukSJWZ>+دK-A.W]|#۾PWt0eu@)%/} TM *p}Sh@ޝy-A~5oBꛤOy-paxo_k s5*5Wq.뿷ŒBM>VE}[T6)UUU+ꁥ %`EZQPskcrԋ\@(x$EE/l`[( )E%*`:FVbzeY$F"rDI>، [w9ؤ`5Ь]lYU_,|_.9ǟswA;3e-rsNBՅt'HZ˹[j~te\ۧUʯŲxԶQ٧; _sP䏷Q#d.йR͓V% t+LU7oH$ m´dhMz r+ ɀ$;[uògo)uj~*SF8T:=[t^VpK;Y^-}[BP<86jOf!vn1wG*Kz1n + F%18mh ^@E))G潉>P't"L pO1]K _~}.qy[擅j֥(QZCVATy-]!%TR,f30)Tor|чyΗjwvہ* ԛ_aUn~zw),) j-\a?OH7ap+&GŐ6M+_v\:eX5Z~X.ZQg\)Fcb6+5IX޶zMp.+JXhp)hWMώ&!^u e#,klS(H65%mn*kѕ(1Ftܣ.-ہ>/!T1'۰> `7,N<,N2t\^"YSU^WÄݏ3%,v. =à4&4\tEH=N 6˅IN Z\0l:.Q'4QK)Ɣ婪ʓije&'tQX\P$5WS wA]I>.XĢ+_/>:L)p:xSRK!;ιzz}>z87ι8gRhWRҪjדtUG◫gFTck3uMK[kJyۑal8.N89-컋yJ((CrmUCVGG Na*^bCo*CARy] *EDM,2#*4IHzI:1Nĝ6h)B+*&0fq9ւ(*RPQPJd11Hs \Ueg| N΁$F2=f!:o-M/dAp#AxK'A+v[E-!VU$VzB07$L[%a.?\GsܬEso=}bFzcc辔SZ==+OFǩ9zʫ8tȫOCUPiS"ӘJ9TNI[*yptH0S~|I\V]7* T.Oe?( B5ߠDC*%aJ ѧƞ!ۻ%[xűT}G6p:0U h?[],s3*oAm62:A=jy9&vS4` ab*je@h'PQ pN1y=,2s@hqNk[X)x H͉&1B )i-Sĉ[+aXmԮDMc.5Z ]gFG(C#wD~Xce<ݏ=7k}&Qˡ^UZ!RY ot!}Ljnt}SiUF}%Ӄ29Jtaj.FAROg\T.Hʜt\A O>.poy_O1 Zʽ 5NoqWOV^DҁpRE- .E6Dj!JW t⮜rSܓSmnFZCkWHp-AgkZtws5[/Uq:KKݭ_}W磸Y};WjC\x|V7m+.SԿc׳oX/JtGp__/9AZwjyIܾ٧{8?[9msؕL: *W4=Y 7"܎4!v+s:,䅛hMIRڟIH-d3]F~o(Kp_?.'+w\={sVGKNEJupuj pd>: ,Z}YuâAqP]Nj8SIA=wߎB3},B.oį|{ʿ6 dNsᥝgqUM7͡OP.洹;Dr . d}v?w-fVfj<%f!IB 0"ĝDe}BȘطz zCI}^G;5f~n`OWOqtUW 4?ZD-dk{/`r0}B R$V13ЯG@~d7"=Oqk 5@: -E/*{yϐqkpo6<'Bmn;c-_P ]`^J!w^J%2R(ϊ`K[õ8.dR1/i$AqߧNK=>Q#.V͵.`hqOYu{Xoc]F[ 3BHQ4O2J'2l s*ƈ ^1+#qJIG3ID*ރ\yUD&rV8Ũ|Ps9xhX r\"h0 AO#Z 9!騥32zog"DF~rz|J}CV(n+}謁D~z>LG,~7޲s)*R;߳>?K1Jzl;tRG?]%&mhnusp cDN/- J _8@79zL\#˪@7(Ds-EDsDsGW`}kTᾈ?ZkZYZhmlhFd6"mY ڬci]p(L˗A9g4 5 =ۿz'U *~Y = Yo&Bcr\( T~B+{ZA yߜ?ᅍ,1{\ \f&gZ۸җݑ~Rrud?\\*cB vR[RԐ1̃RpuhCCv\HNRg)JIslF\#r `pʜ4yLv?aI2 )U>DzPO9yk. 
1 Dx|)R]2>^\#O%{ }#d Xn,&#C{sJ91:7Yo} YGB&I`ǡRh4̽W u_)(#cPEfhxބNӈ]7ǣڸu$ʠve = i?ub\Ap*f@'ct>G ɲ7> "XB`1z+$GP |c4 -*$ +L hL Wp$pw&#nNMۀSznn1$^x5ω=`@ߛX|ލfaiu"&̡+%1>SJ?>okAa^8Y`-rF gГ vʰ >lP2"[ISԯFv >_AlIpeŽ;-&A~uړ Dk] b8_!~1jO\ ;~8ቹtTڙNT,WО4)3c&EE^#?$B^J!>P|JrF>L8jjQa1;9kPQ<2e@"S*h.!Ԍq+Gz~"ٖ!3*bIs|` Гu" (w,)EW/\_NDŽR NlǯV}3*A󢶏PrL9#yƅՍk c!MSVSLHQyRKs  Avn\L1j8)M؂c 07rZiJ./J h` a 4% H&p6a2Ks)9sr$ 1 !{ f5 07[wtq^Z(}L]vN!?~|+XPbL&A7o~Qz`ۅknϩUkuxuzILF_J~X;nf3kIguZu4|ϻmuG`m|(o?={W  a//XR* \ v.0e^%/ k}UK-/ 8"U*hsyyJ^cyAO^;-$R%ST=A/WU+˾mW c 9'bj6Xd;g^hN=eK@ "%ߺZ:2i: 3U$tg.`Յ gq#3lY6zŧzA~iZpN\{{sOmv8/Mu错}s~d=Q_Դ(ar0K-G(8n=H=^anD-c\H;$}_ݸ'֘XOIYޒ㨴#3k^m5wvᚁOypwcm n Ӻֺ#JMlFd,qV}( $%Dr xF7b$1H [SLH(xbYk5UJyjV RbALgsKkH< %bJRyڤk-UNw%S"$?(AJ;3XOP!wuu nkn5TaWSOۀ9ș$VpBb$9 PBh؀;*Z/%МkVr#ܘ,GdΊ1`wg(hf+f+52Tz[Qoh>oH멚Mb%~7zt~r6+;I;I3#rΤ$E@(39p++2D,ZA<{H%ܒ/|vic) ]䃟{H 8|u WuBP.PBp:՜<?[ㅸ< ^&b\JBBٚj $G+wjq# I>,=~԰)H޺ҏkgx]3Ƙ!v~- yc=nB}QAg 6Bd|Q˺f9S-_3YrGNw* i>|vbkj<d̠SuTcW`~f5]ᓢ>ӻXwoרYyVQ} L8 dy[Q@p N:%U`T.jI.ǹLT PjK\QAkB :VӮc9W^a㝄#LX8 w!t6mN0 x> O'"4}^Qd{<$Ţ-bټe!8X/C`0%p!d&v3\'O GY c*_-J kɈlZ\1CXbT_/ ofU(6Ag8l~b&C,a>Ӊ\"ɸ+k`YcnqƂ-;] P+yEMPKS d3XRÊxn4ɿE6+J?Vz~HŽ`JRmӘOM#x5"{4D hG)M;k>vXkO)m6̔RcSNt)bQacoM|/B^QPso>B+x^݆zF݂*bh~яfMtœeO6/]t{w}]XKʚ[P'덈+O kAE+V5cՒUu@ӺZ북MҝH@_\$=!] ~~O'? Z^siNoa`$̏M$̏McuL/rҗ1w\>_IIqCǜ'|`skI<a`/Nx(+|/ ku +HōХsuNzn&}x+ <+?~^u:/Ep8}ç834ߗ8;%T?ߏFp} pm@f]SM.A@fn?n RVvEvd #'kd!̈́W; >!2N q>֥;apBcA@xf'G1f ~k%{RZ[=\cb DLHXf] 80`OX,}npXߍ+/N ,!G4@f%xReFepY@$6ͨcéfJ5]x#K% wјr ~zN=8424|%<!i#r| Mx4Mv  dS pPlEPrzeT9o

Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[1697639791]: ---"Objects listed" error: 11093ms (10:59:15.120)
Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[1697639791]: [11.093229993s] [11.093229993s] END
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.120218 4593 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.121402 4593 trace.go:236] Trace[1229695765]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 10:59:02.458) (total time: 12662ms):
Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[1229695765]: ---"Objects listed" error: 12662ms (10:59:15.121)
Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[1229695765]: [12.662952161s] [12.662952161s] END
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.121433 4593 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.121488 4593 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.121556 4593 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.123101 4593 trace.go:236] Trace[522285009]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 10:59:04.169) (total time: 10953ms):
Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[522285009]: ---"Objects listed" error: 10953ms (10:59:15.122)
Jan 29 10:59:15 crc kubenswrapper[4593]: Trace[522285009]: [10.953750528s] [10.953750528s] END
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.123123 4593 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 10:59:15 crc kubenswrapper[4593]: E0129 10:59:15.124373 4593 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.130128 4593 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.206619 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.213863 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.256371 4593 csr.go:261] certificate signing request csr-pdwsj is approved, waiting to be issued
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.284275 4593 csr.go:257] certificate signing request csr-pdwsj is issued
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.610679 4593 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36752->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.610758 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:36752->192.168.126.11:17697: read: connection reset by peer"
Jan 29 10:59:15 crc kubenswrapper[4593]: I0129 10:59:15.998884 4593 apiserver.go:52] "Watching apiserver"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.002399 4593 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.002778 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"]
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.003260 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.003369 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.003483 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.003785 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.003880 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.004087 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.004157 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.004250 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.004462 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.008329 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.008702 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.009156 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.009877 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.010215 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.011914 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.012156 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.012489 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.015996 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.017416 4593 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026705 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026746 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026768 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026783 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026799 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026819 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026834 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026850 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026880 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026974 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.026994 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027010 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027028 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027043 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027058 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027072 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027089 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027103 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027118 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027151 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027169 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027184 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027200 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027215 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027232 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027248 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027264 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027281 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027300 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027316 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027332 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027349 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027381 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027381 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027399 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027418 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027434 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027451 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027472 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027487 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027502 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027518 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027548 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027563 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027582 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027598 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027613 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027677 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027694 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027710 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027725 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027740 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027754 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027769 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027783 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027818 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027834 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027848 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027864 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027878 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027894 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027900 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027909 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027925 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027942 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027957 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027972 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.027987 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028004 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028019 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028034 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028051 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028066 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028081 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028097 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028112 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028128 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028143 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028158 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028165 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028175 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028191 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028206 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028223 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028221 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028241 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028259 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028273 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028291 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028307 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028323 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028337 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028353 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028367 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028382 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028398 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028413 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028435 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028451 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028467 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028482 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028497 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028514 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028529 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028545 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028560 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028575 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028592 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028607 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028622 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028652 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028671 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028686 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028704 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028720 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028735 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028749 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028765 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028780 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028795 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028835 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028850 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028881 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028895 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028911 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028927 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028943 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028958 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028973 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028989 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029005 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029021 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029038 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029054 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029071 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029089 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029105 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029121 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029136 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029153 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029168 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029184 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029204 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029221 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029236 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029253 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029269 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029284 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029300 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029315 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029338 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029361 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029384 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029407 4593 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029429 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029451 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029469 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029485 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029501 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029519 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029536 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029552 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029568 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029585 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029602 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029618 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029651 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029669 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029685 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029700 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029716 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029732 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029748 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029767 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029783 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029831 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029849 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029881 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029897 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029915 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029931 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029948 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029966 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029982 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029999 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030015 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030032 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030049 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030066 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030083 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030137 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030177 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030204 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030227 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030250 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030279 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.035860 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028409 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.028675 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029092 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029319 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029559 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.029837 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030004 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.030159 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.031809 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:39:51.845570733 +0000 UTC Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.032297 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:16.532166122 +0000 UTC m=+22.405200313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.032327 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.032407 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.032488 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.032894 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033085 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033263 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033362 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033438 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033473 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033670 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.033744 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034140 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034301 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034558 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034748 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034798 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.034919 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.035344 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.035453 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.035517 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036323 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036500 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036525 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036788 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036799 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.036984 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037041 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037253 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037359 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037479 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037645 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.037994 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.038208 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.038141 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.038385 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.038649 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.039046 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.039086 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.039309 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.039594 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040059 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040228 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040380 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040530 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040579 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040697 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040937 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.040192 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.041364 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047164 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047270 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047330 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047441 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047819 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.047925 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.048333 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.048768 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.052624 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.053771 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.053775 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.053833 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.053877 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.057452 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.057650 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.060192 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.060515 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.060756 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061081 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061250 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061289 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061420 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061771 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061875 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.061908 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.062081 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.062157 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.062542 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.062585 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.062991 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063171 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063280 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063384 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063295 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063592 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063728 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.063908 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.064016 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mkxdt"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.064116 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.064163 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.064261 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.064386 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065088 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065241 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065322 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065321 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065465 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065521 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.065617 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.066403 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.066561 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.067627 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.068119 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.068336 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.068351 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.068610 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.068874 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.070001 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.070147 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.070451 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.070765 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.070939 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.071030 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.071204 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.071345 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.071307 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072094 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072107 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072093 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072404 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072426 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072536 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072578 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072887 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.072961 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.073091 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.074333 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.074463 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.074569 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.074804 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075041 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075058 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075224 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075398 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075405 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075550 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075813 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.075722 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.076166 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.076369 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.076592 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.076616 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.077009 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.077131 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.077461 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.077565 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.077834 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.078447 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.078786 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.079024 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.079159 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.079327 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.079689 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.079841 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.080054 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.080065 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.080085 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.080262 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.082236 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.082290 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.082670 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.082772 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.083213 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.084007 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.085614 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.085917 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.086087 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.086219 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.086661 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.086890 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.052786 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087787 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087838 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087868 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087894 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" 
Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087914 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087937 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087957 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.087979 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088005 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088038 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088056 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088073 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088055 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088091 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088181 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088199 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088373 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.088573 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.088609 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.088678 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:16.588660977 +0000 UTC m=+22.461695168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.089777 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.089801 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.090058 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:16.590048525 +0000 UTC m=+22.463082716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.090076 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.090581 4593 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.092495 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093455 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093510 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093529 4593 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093542 4593 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093554 4593 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093566 4593 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093579 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093591 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093603 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093614 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.093626 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.108712 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.108909 4593 reconciler_common.go:293] "Volume detached for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.108967 4593 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109037 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109096 4593 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109147 4593 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109202 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109258 4593 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109315 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109371 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109424 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109481 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109537 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109593 4593 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109698 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109759 4593 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109812 4593 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.096533 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109882 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.109994 4593 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110013 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110030 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110044 4593 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110056 4593 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110069 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110081 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110090 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on 
node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110099 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110107 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110116 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110124 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110139 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110148 4593 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110160 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110171 4593 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110182 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110191 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110202 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110212 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110222 4593 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" 
DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110233 4593 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110244 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110257 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110269 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110280 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110290 4593 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110304 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110316 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110325 4593 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110334 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110342 4593 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110351 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110359 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110369 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110377 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110387 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110396 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110408 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110420 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110432 4593 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110444 4593 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110456 4593 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110466 4593 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110474 4593 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110482 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110492 4593 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110500 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110508 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110516 4593 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110524 4593 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110532 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110540 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110548 4593 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110556 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110564 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110573 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110581 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110591 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110599 4593 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110608 4593 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110616 4593 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110647 4593 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110656 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110666 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110675 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110683 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110691 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110700 4593 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110708 4593 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110716 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110725 4593 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110733 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110740 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110748 4593 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110756 4593 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110764 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110773 4593 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110783 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110794 4593 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110804 4593 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110814 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110822 4593 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110830 4593 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110837 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110845 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node 
\"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110854 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110863 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110871 4593 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110879 4593 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110888 4593 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110896 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110904 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110912 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110922 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110931 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110940 4593 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110949 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110957 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110965 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110973 4593 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110982 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110990 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.110999 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111007 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111015 4593 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111023 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111032 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111040 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111048 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111056 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111063 4593 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111071 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111080 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111088 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111096 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111104 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111112 4593 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111121 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111129 4593 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111137 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111145 4593 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111153 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111161 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111169 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111177 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111186 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111194 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111203 4593 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111211 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111219 4593 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111227 4593 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111236 4593 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111244 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111252 4593 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111261 4593 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111268 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111276 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" 
(UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111284 4593 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111293 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111302 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.111311 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.096020 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.096312 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.103724 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.105265 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.107006 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.107033 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.107107 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.107186 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.107957 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.111397 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.111411 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.111481 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:16.611444652 +0000 UTC m=+22.484478843 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.095077 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.101263 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.106823 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.108706 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.112046 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.114495 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.118243 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.119059 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.120687 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.121916 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.123051 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.123139 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.123575 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.123772 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.123796 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.124225 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.125018 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:16.62434123 +0000 UTC m=+22.497375421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.131891 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.146982 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.147038 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.153029 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.173409 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.207938 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.212538 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213099 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213144 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjtz8\" (UniqueName: \"kubernetes.io/projected/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-kube-api-access-gjtz8\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213183 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-hosts-file\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213261 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213313 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213328 4593 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213341 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213351 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213362 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213374 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213386 4593 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213396 4593 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213407 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213418 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213429 4593 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213440 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213452 4593 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213462 4593 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213473 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213485 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213495 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213505 4593 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213516 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213551 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.213779 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.214137 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.227205 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" exitCode=255 Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.227724 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709"} Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.227778 4593 scope.go:117] "RemoveContainer" containerID="47f750a8d01af88118b5ba0f1743bb4357e5eff487d231fdb6962b1a151d898c" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.240958 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.255571 4593 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.256237 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.282013 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.286388 4593 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 10:54:15 +0000 UTC, rotation deadline is 2026-12-12 08:13:55.378279691 +0000 UTC Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.286448 4593 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7605h14m39.091854056s for next certificate rotation Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.305600 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.314582 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.314686 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjtz8\" (UniqueName: \"kubernetes.io/projected/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-kube-api-access-gjtz8\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.314725 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-hosts-file\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.314827 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-hosts-file\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.316334 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.323056 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.339036 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.341262 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.346759 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjtz8\" (UniqueName: \"kubernetes.io/projected/b36fce0b-62b3-4076-a13e-e6048a4d9a4e-kube-api-access-gjtz8\") pod \"node-resolver-mkxdt\" (UID: \"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\") " pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.353292 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.353745 4593 scope.go:117] "RemoveContainer" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.353913 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.357396 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.376381 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.396338 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.420911 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mkxdt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.422123 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.444669 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.467407 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.616931 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.616994 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.617018 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.617036 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617095 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617171 4593 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:17.617155161 +0000 UTC m=+23.490189352 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617440 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617485 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:17.61747343 +0000 UTC m=+23.490507621 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617521 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617533 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617543 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617552 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:17.617542681 +0000 UTC m=+23.490576872 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.617567 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-29 10:59:17.617560372 +0000 UTC m=+23.490594563 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.718272 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.718403 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.718417 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.718427 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: E0129 10:59:16.718463 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:17.718451135 +0000 UTC m=+23.591485326 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.933401 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xpt4q"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.933810 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xpt4q" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.935407 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zk9np"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.935982 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-p4zf2"] Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.936252 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.936599 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.937600 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.937857 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.938774 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.938915 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.939903 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.940499 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.941346 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.942233 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.942424 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.942581 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.942928 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.946262 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.967400 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.978703 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.986942 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:16 crc kubenswrapper[4593]: I0129 10:59:16.996129 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.003117 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.012764 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.016932 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.022548 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://47f750a8d01af88118b5ba0f1743bb4357e5eff487d231fdb6962b1a151d898c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:58:59Z\\\",\\\"message\\\":\\\"W0129 10:58:58.855341 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 10:58:58.855626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769684338 cert, and key in /tmp/serving-cert-1536064180/serving-signer.crt, /tmp/serving-cert-1536064180/serving-signer.key\\\\nI0129 10:58:59.363427 1 observer_polling.go:159] Starting file observer\\\\nW0129 10:58:59.365835 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 10:58:59.366014 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:58:59.368330 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1536064180/tls.crt::/tmp/serving-cert-1536064180/tls.key\\\\\\\"\\\\nF0129 10:58:59.631826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.032038 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.041898 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.055264 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:53:54.938803927 +0000 UTC Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.064429 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.073057 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.082875 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.083521 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.084489 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.085276 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.085942 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.086479 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.087150 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.087785 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.088493 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: 
I0129 10:59:17.089178 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.091142 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.091613 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.092989 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.093931 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.099993 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.100734 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.101301 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.102138 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.102528 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.103170 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.103861 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.104373 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.104964 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.105440 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.106136 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.106535 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.107194 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.110300 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.110880 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.111547 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.112539 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.113058 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.113112 4593 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.113485 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.115792 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.116314 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" 
path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.116802 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.119190 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.119868 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.120729 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.121348 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.122590 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123066 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-os-release\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123109 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123109 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-k8s-cni-cncf-io\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123288 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-multus\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123310 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-hostroot\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123346 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-55q6g\" (UniqueName: \"kubernetes.io/projected/5eed1f11-8e73-4894-965f-a670f6c877b3-kube-api-access-55q6g\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123363 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-cnibin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123382 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-socket-dir-parent\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123422 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-netns\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123479 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhqmv\" (UniqueName: \"kubernetes.io/projected/c76afd0b-36c6-4faa-9278-c08c60c483e9-kube-api-access-mhqmv\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123541 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-etc-kubernetes\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123564 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7p5\" (UniqueName: \"kubernetes.io/projected/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-kube-api-access-8r7p5\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123583 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123606 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-os-release\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123645 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123703 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123781 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5eed1f11-8e73-4894-965f-a670f6c877b3-rootfs\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123827 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cnibin\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123857 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-bin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123872 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-conf-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123886 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-daemon-config\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123903 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-cni-binary-copy\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123921 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-kubelet\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " 
pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123935 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-multus-certs\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123965 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5eed1f11-8e73-4894-965f-a670f6c877b3-mcd-auth-proxy-config\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.123984 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-system-cni-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.124002 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5eed1f11-8e73-4894-965f-a670f6c877b3-proxy-tls\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.124020 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-binary-copy\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.124035 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-system-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.124485 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.125593 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.126559 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.127005 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 10:59:17 
crc kubenswrapper[4593]: I0129 10:59:17.127927 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.128391 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.130029 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.130507 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.130696 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.131504 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.132149 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.132962 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.133623 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.134776 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.138834 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.147472 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.158460 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.170091 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47f750a8d01af88118b5ba0f1743bb4357e5eff487d231fdb6962b1a151d898c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:58:59Z\\\",\\\"message\\\":\\\"W0129 10:58:58.855341 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0129 
10:58:58.855626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769684338 cert, and key in /tmp/serving-cert-1536064180/serving-signer.crt, /tmp/serving-cert-1536064180/serving-signer.key\\\\nI0129 10:58:59.363427 1 observer_polling.go:159] Starting file observer\\\\nW0129 10:58:59.365835 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0129 10:58:59.366014 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:58:59.368330 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1536064180/tls.crt::/tmp/serving-cert-1536064180/tls.key\\\\\\\"\\\\nF0129 10:58:59.631826 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.178387 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.185270 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.195317 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.204056 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225482 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55q6g\" (UniqueName: \"kubernetes.io/projected/5eed1f11-8e73-4894-965f-a670f6c877b3-kube-api-access-55q6g\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225527 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-cnibin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225550 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-socket-dir-parent\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225579 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-netns\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225600 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhqmv\" (UniqueName: \"kubernetes.io/projected/c76afd0b-36c6-4faa-9278-c08c60c483e9-kube-api-access-mhqmv\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225650 4593 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-cnibin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225663 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-etc-kubernetes\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225695 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r7p5\" (UniqueName: \"kubernetes.io/projected/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-kube-api-access-8r7p5\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225699 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-etc-kubernetes\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225713 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225728 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225743 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225742 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-netns\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225758 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-os-release\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225791 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-socket-dir-parent\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225828 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5eed1f11-8e73-4894-965f-a670f6c877b3-rootfs\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225850 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-os-release\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225872 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cnibin\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225874 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5eed1f11-8e73-4894-965f-a670f6c877b3-rootfs\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225850 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cnibin\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225940 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.225979 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-daemon-config\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226045 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-bin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226077 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-conf-dir\") pod \"multus-xpt4q\" (UID: 
\"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226092 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-cni-binary-copy\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226106 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-kubelet\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226121 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-multus-certs\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226138 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5eed1f11-8e73-4894-965f-a670f6c877b3-mcd-auth-proxy-config\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226151 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-system-cni-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226167 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-system-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226183 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5eed1f11-8e73-4894-965f-a670f6c877b3-proxy-tls\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226198 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-binary-copy\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226213 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-multus\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " 
pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226227 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-hostroot\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226241 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-os-release\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226255 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-k8s-cni-cncf-io\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226264 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226295 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-bin\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226298 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-k8s-cni-cncf-io\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-conf-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226343 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-system-cni-dir\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226606 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226676 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-run-multus-certs\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226711 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-kubelet\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226738 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-multus-daemon-config\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226742 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-host-var-lib-cni-multus\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226892 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c76afd0b-36c6-4faa-9278-c08c60c483e9-cni-binary-copy\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226935 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-system-cni-dir\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.226970 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-hostroot\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.227013 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c76afd0b-36c6-4faa-9278-c08c60c483e9-os-release\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.227257 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-cni-binary-copy\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.227495 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5eed1f11-8e73-4894-965f-a670f6c877b3-mcd-auth-proxy-config\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " 
pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.230930 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5eed1f11-8e73-4894-965f-a670f6c877b3-proxy-tls\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.232433 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.236047 4593 scope.go:117] "RemoveContainer" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.236218 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.236585 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.236648 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.236665 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"733b54284c53dba7cd23ad45db0c26275c95ac566949f4efed0456268a8a20c2"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.238943 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mkxdt" event={"ID":"b36fce0b-62b3-4076-a13e-e6048a4d9a4e","Type":"ContainerStarted","Data":"0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.238969 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mkxdt" event={"ID":"b36fce0b-62b3-4076-a13e-e6048a4d9a4e","Type":"ContainerStarted","Data":"10265cd6a588580a14d990e741ef622df68d39b013bae419362fb8669801ea24"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.240339 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"69f261cd01b221f59b9f0148d4f97e91703379b517b24361eae47b76c3f6abd4"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.241800 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.241841 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ef51da5632e392d63a93a615ba597a7b97d242895b667eea43a587c69774adb4"} Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.246501 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhqmv\" (UniqueName: \"kubernetes.io/projected/c76afd0b-36c6-4faa-9278-c08c60c483e9-kube-api-access-mhqmv\") pod \"multus-xpt4q\" (UID: \"c76afd0b-36c6-4faa-9278-c08c60c483e9\") " pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.247854 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.248131 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-xpt4q" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.250423 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55q6g\" (UniqueName: \"kubernetes.io/projected/5eed1f11-8e73-4894-965f-a670f6c877b3-kube-api-access-55q6g\") pod \"machine-config-daemon-p4zf2\" (UID: \"5eed1f11-8e73-4894-965f-a670f6c877b3\") " pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.250469 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r7p5\" (UniqueName: \"kubernetes.io/projected/1bf08558-eb2b-4c00-8494-6f9691a7e3b6-kube-api-access-8r7p5\") pod \"multus-additional-cni-plugins-zk9np\" (UID: \"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\") " pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.254233 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.258898 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.261790 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zk9np" Jan 29 10:59:17 crc kubenswrapper[4593]: W0129 10:59:17.262428 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc76afd0b_36c6_4faa_9278_c08c60c483e9.slice/crio-1504d83bba4a32e82f9d5d28f49062cf7fa579696bbc14a30b8df9d8cecd92bf WatchSource:0}: Error finding container 1504d83bba4a32e82f9d5d28f49062cf7fa579696bbc14a30b8df9d8cecd92bf: Status 404 returned error can't find the container with id 1504d83bba4a32e82f9d5d28f49062cf7fa579696bbc14a30b8df9d8cecd92bf Jan 29 10:59:17 crc kubenswrapper[4593]: W0129 10:59:17.288740 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bf08558_eb2b_4c00_8494_6f9691a7e3b6.slice/crio-49093df79a552ddc90e1fcfbbd12c91c1d57d09ae6494083e3e492caa6cbb919 WatchSource:0}: Error finding container 49093df79a552ddc90e1fcfbbd12c91c1d57d09ae6494083e3e492caa6cbb919: Status 404 returned error can't find the container with id 49093df79a552ddc90e1fcfbbd12c91c1d57d09ae6494083e3e492caa6cbb919 Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.288923 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.304987 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.312966 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vmt7l"] Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.313865 4593 util.go:30] "No sandbox for pod can be found. 
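The "Failed to update status for pod" entries in this stretch of the log share one root cause: every status PATCH is intercepted by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod, which first refused connections (the webhook pod was still starting) and now presents a serving certificate whose NotAfter, 2025-08-24T17:21:41Z, is months behind the node clock of 2026-01-29T10:59:17Z, so every call dies with "x509: certificate has expired or is not yet valid". A sketch for confirming the expiry from the node; it assumes the third-party cryptography package, and the host/port are taken from the log above:

    # Fetch the webhook's serving certificate and compare NotAfter to the clock.
    import datetime
    import socket
    import ssl
    from cryptography import x509  # third-party package, assumed installed

    def cert_not_after(host="127.0.0.1", port=9743):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # we want the cert itself, not a verdict
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return x509.load_der_x509_certificate(der).not_valid_after

    if __name__ == "__main__":
        not_after = cert_not_after()
        now = datetime.datetime.utcnow()
        print("expired" if now > not_after else "valid", "NotAfter:", not_after)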
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.324471 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.324780 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.324814 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.324931 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.324989 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.325126 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.325240 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.328181 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.350187 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.373260 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.390324 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.403678 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.414142 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.425565 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429379 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 
10:59:17.429408 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfpld\" (UniqueName: \"kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429451 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429465 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429491 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429506 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429527 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429541 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429589 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429604 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429646 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429661 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429676 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429691 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429706 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429728 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429745 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429760 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429773 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.429788 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.436394 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117
eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.451847 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.463252 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.480912 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.493830 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.510802 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.520669 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530184 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530344 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530440 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530726 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530812 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530888 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.530975 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531073 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531161 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides\") pod \"ovnkube-node-vmt7l\" (UID: 
\"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531247 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531332 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531403 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531485 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531565 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531650 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531690 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531496 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531724 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: 
\"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531757 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531769 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531778 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531615 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.531862 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532107 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532197 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532296 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532366 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 
10:59:17.532235 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532340 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532528 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532616 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532733 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532818 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfpld\" (UniqueName: \"kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.532319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.533208 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.533319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.533411 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.533512 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.535520 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.556475 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfpld\" (UniqueName: \"kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld\") pod \"ovnkube-node-vmt7l\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.598514 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.618986 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.631448 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.633811 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.633872 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.633942 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:19.633914204 +0000 UTC m=+25.506948435 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.633955 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.634000 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.634024 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634148 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634179 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:19.634172401 +0000 UTC m=+25.507206592 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634196 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:19.634190071 +0000 UTC m=+25.507224262 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634230 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634243 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634253 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.634284 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:19.634273194 +0000 UTC m=+25.507307385 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.644432 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.649582 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.657666 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: W0129 10:59:17.660323 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod943b00a1_4aae_4054_b4fd_dc512fe58270.slice/crio-1f4d4677f9da87318adb658a3d5c60bf8ae9dd156ada23706892dfb2a3940ad7 WatchSource:0}: Error finding container 1f4d4677f9da87318adb658a3d5c60bf8ae9dd156ada23706892dfb2a3940ad7: Status 404 returned error can't find the container with id 1f4d4677f9da87318adb658a3d5c60bf8ae9dd156ada23706892dfb2a3940ad7 Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.676374 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.698974 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:17Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:17 crc kubenswrapper[4593]: I0129 10:59:17.734691 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.734812 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.734831 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.734857 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:17 crc kubenswrapper[4593]: E0129 10:59:17.734907 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:19.734891309 +0000 UTC m=+25.607925500 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.055998 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:06:00.144772909 +0000 UTC Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.074429 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.074474 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.074435 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:18 crc kubenswrapper[4593]: E0129 10:59:18.074567 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:18 crc kubenswrapper[4593]: E0129 10:59:18.075399 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:18 crc kubenswrapper[4593]: E0129 10:59:18.075612 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.245917 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" exitCode=0 Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.246039 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.246101 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"1f4d4677f9da87318adb658a3d5c60bf8ae9dd156ada23706892dfb2a3940ad7"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.247378 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerStarted","Data":"c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.247412 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerStarted","Data":"1504d83bba4a32e82f9d5d28f49062cf7fa579696bbc14a30b8df9d8cecd92bf"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.249068 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.251110 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.251142 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.251153 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"2cb67a7dc3348ff0e620365865ac008e4766d68d233d0f9b6ae4fe16981dda04"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.254026 4593 generic.go:334] "Generic (PLEG): container 
finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8" exitCode=0 Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.254080 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.254146 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerStarted","Data":"49093df79a552ddc90e1fcfbbd12c91c1d57d09ae6494083e3e492caa6cbb919"} Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.254475 4593 scope.go:117] "RemoveContainer" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" Jan 29 10:59:18 crc kubenswrapper[4593]: E0129 10:59:18.254602 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.286757 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.309735 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.323582 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.334423 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.346251 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.359229 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.378540 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/
\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.391954 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.406025 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.421999 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.437930 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.447531 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.459449 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.471999 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.481550 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.492501 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.504570 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.516581 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.533060 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.548437 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.566323 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.578733 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.592201 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.604203 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc kubenswrapper[4593]: I0129 10:59:18.622500 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:18 crc 
kubenswrapper[4593]: I0129 10:59:18.642528 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"Po
dInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:18Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.056924 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:38:53.855874076 +0000 UTC Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.146921 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-42qv9"] Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.147293 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.149089 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.150264 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.150320 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.150389 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.161218 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.175183 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.192905 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z 
is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.202873 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.214696 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.229259 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.247049 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.250146 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kd2v\" (UniqueName: \"kubernetes.io/projected/bae5deb1-f488-4080-8a68-215c491015f7-kube-api-access-2kd2v\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.250197 4593 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae5deb1-f488-4080-8a68-215c491015f7-host\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.250222 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae5deb1-f488-4080-8a68-215c491015f7-serviceca\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.258245 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.258291 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.260231 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerStarted","Data":"bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27"} Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.265458 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.283137 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.297496 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.308132 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.319776 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.338519 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.350030 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.351429 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kd2v\" (UniqueName: \"kubernetes.io/projected/bae5deb1-f488-4080-8a68-215c491015f7-kube-api-access-2kd2v\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.351535 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae5deb1-f488-4080-8a68-215c491015f7-host\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.351561 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae5deb1-f488-4080-8a68-215c491015f7-serviceca\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.352141 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bae5deb1-f488-4080-8a68-215c491015f7-host\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.352806 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bae5deb1-f488-4080-8a68-215c491015f7-serviceca\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.362983 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.370532 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kd2v\" (UniqueName: \"kubernetes.io/projected/bae5deb1-f488-4080-8a68-215c491015f7-kube-api-access-2kd2v\") pod \"node-ca-42qv9\" (UID: \"bae5deb1-f488-4080-8a68-215c491015f7\") " pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.374387 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.391833 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.428466 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.467251 4593 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.510115 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-42qv9" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.515190 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-k
ube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: W0129 10:59:19.531522 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae5deb1_f488_4080_8a68_215c491015f7.slice/crio-b9f91f8bb5cb6dc2f93fe94eb835048ca35d34b9901012f2506c8acac05d88b7 WatchSource:0}: Error finding container b9f91f8bb5cb6dc2f93fe94eb835048ca35d34b9901012f2506c8acac05d88b7: Status 404 returned error can't find the container with id b9f91f8bb5cb6dc2f93fe94eb835048ca35d34b9901012f2506c8acac05d88b7 Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.551196 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.593500 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.634231 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.656213 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.656351 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:23.656329841 +0000 UTC m=+29.529364032 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.656933 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.656967 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.656988 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657089 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657130 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:23.657121951 +0000 UTC m=+29.530156132 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657337 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657357 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657362 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657386 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657401 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:23.657390368 +0000 UTC m=+29.530424569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.657421 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:23.65741165 +0000 UTC m=+29.530445841 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.672221 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.709884 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.755360 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.758005 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.758190 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.758232 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.758246 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:19 crc kubenswrapper[4593]: E0129 10:59:19.758309 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:23.758291013 +0000 UTC m=+29.631325264 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.797895 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:19 crc kubenswrapper[4593]: I0129 10:59:19.837363 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77
3257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev
/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:19Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.058019 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:45:34.69138042 +0000 UTC Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.074111 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.074166 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:20 crc kubenswrapper[4593]: E0129 10:59:20.074256 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:20 crc kubenswrapper[4593]: E0129 10:59:20.074388 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.074491 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:20 crc kubenswrapper[4593]: E0129 10:59:20.074678 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.266358 4593 generic.go:334] "Generic (PLEG): container finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27" exitCode=0 Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.266615 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.268091 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-42qv9" event={"ID":"bae5deb1-f488-4080-8a68-215c491015f7","Type":"ContainerStarted","Data":"b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.268113 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-42qv9" event={"ID":"bae5deb1-f488-4080-8a68-215c491015f7","Type":"ContainerStarted","Data":"b9f91f8bb5cb6dc2f93fe94eb835048ca35d34b9901012f2506c8acac05d88b7"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.271459 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.271496 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.271505 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.271515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.284620 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.296719 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.308141 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.319042 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.330342 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.344555 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.357298 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.368536 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.384804 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.394858 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.405894 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.424285 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.439162 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.453037 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.467304 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.480590 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.517183 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z 
is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.548779 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.590427 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.631001 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.670217 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.708282 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.747968 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.790884 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.829450 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.872108 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.912535 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:20 crc kubenswrapper[4593]: I0129 10:59:20.948406 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:20Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.059325 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:36:16.246493116 +0000 UTC Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.276223 4593 generic.go:334] "Generic (PLEG): container finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f" exitCode=0 Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.276265 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f"} Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.295283 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.320787 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.343714 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z 
is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.356420 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.367130 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.378275 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.388200 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.397824 4593 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.408815 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.420603 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.440027 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.450378 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.469832 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.510487 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.524664 4593 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.526715 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.526746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.526757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.526856 4593 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.541189 4593 
kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.541456 4593 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.542695 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.542733 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.542744 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.542760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.542772 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.560386 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.563174 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.563198 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.563206 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.563218 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.563228 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.575788 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.578744 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.578775 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.578784 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.578798 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.578808 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.589897 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.593843 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.593876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.593885 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.593898 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.593907 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.604723 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.607510 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.607549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.607559 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.607575 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.607585 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.617628 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:21Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:21 crc kubenswrapper[4593]: E0129 10:59:21.617771 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.618939 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.618961 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.618971 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.618983 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.618994 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.721693 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.721787 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.721805 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.721867 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.721886 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.825250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.825291 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.825303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.825320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.825332 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.927309 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.927352 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.927366 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.927382 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:21 crc kubenswrapper[4593]: I0129 10:59:21.927393 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:21Z","lastTransitionTime":"2026-01-29T10:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.029813 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.029861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.029873 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.029891 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.029907 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.060484 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:25:23.331979727 +0000 UTC Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.073956 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.074015 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:22 crc kubenswrapper[4593]: E0129 10:59:22.074082 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.074026 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:22 crc kubenswrapper[4593]: E0129 10:59:22.074155 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:22 crc kubenswrapper[4593]: E0129 10:59:22.074252 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.133185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.133215 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.133226 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.133240 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.133252 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.236659 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.236722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.236741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.236762 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.236778 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.282504 4593 generic.go:334] "Generic (PLEG): container finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7" exitCode=0 Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.282571 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.287862 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.306404 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.324604 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.338546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.338575 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.338584 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.338596 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.338604 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.344873 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b785069130
66867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.356450 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.367037 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.377916 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.388578 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.400008 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.412148 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.422804 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.438020 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.443197 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.443265 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.443274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.443289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.443298 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.449719 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.462085 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.479311 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:22Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.545825 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.545856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.545864 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.545880 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.545889 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.648407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.648453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.648468 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.648489 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.648505 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.751783 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.751828 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.751840 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.751857 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.751870 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.854517 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.854562 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.854572 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.854587 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.854597 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.956876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.956914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.956927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.956944 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:22 crc kubenswrapper[4593]: I0129 10:59:22.956956 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:22Z","lastTransitionTime":"2026-01-29T10:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.059375 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.059408 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.059420 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.059434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.059444 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.060668 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 06:54:49.501933149 +0000 UTC Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.162340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.162463 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.162477 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.162495 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.162505 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.264479 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.264529 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.264540 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.264556 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.264568 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.293486 4593 generic.go:334] "Generic (PLEG): container finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d" exitCode=0 Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.293522 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.307061 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.318797 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.330451 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.341555 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.354940 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.367022 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.368859 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.368914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.368931 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.368946 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.368955 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.378838 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.393303 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.412338 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z 
is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.427868 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.439740 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.451835 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.462167 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.470850 4593 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.470884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.470894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.470918 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.470929 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.474792 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:23Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.573620 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.573681 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.573689 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.573705 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.573716 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.676392 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.676426 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.676435 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.676449 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.676458 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.690973 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.691067 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691115 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.69108688 +0000 UTC m=+37.564121091 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691155 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.691176 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691197 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.691184933 +0000 UTC m=+37.564219124 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.691225 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691315 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691388 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.691372409 +0000 UTC m=+37.564406600 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691388 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691409 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691420 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.691463 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.691453981 +0000 UTC m=+37.564488252 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.778728 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.778768 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.778779 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.778795 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.778806 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.792610 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.792801 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.792833 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.792845 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:23 crc kubenswrapper[4593]: E0129 10:59:23.792913 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.792882638 +0000 UTC m=+37.665916829 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.881234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.881267 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.881275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.881287 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.881295 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.983090 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.983159 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.983170 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.983185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:23 crc kubenswrapper[4593]: I0129 10:59:23.983194 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:23Z","lastTransitionTime":"2026-01-29T10:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.061740 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:09:36.832483787 +0000 UTC Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.074018 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.074048 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.074018 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:24 crc kubenswrapper[4593]: E0129 10:59:24.074129 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:24 crc kubenswrapper[4593]: E0129 10:59:24.074195 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:24 crc kubenswrapper[4593]: E0129 10:59:24.074265 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.088553 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.088597 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.088680 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.088715 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.088730 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.191728 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.191772 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.191784 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.191802 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.191815 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.293686 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.293728 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.293738 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.293753 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.293762 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.299114 4593 generic.go:334] "Generic (PLEG): container finished" podID="1bf08558-eb2b-4c00-8494-6f9691a7e3b6" containerID="2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022" exitCode=0 Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.299163 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerDied","Data":"2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.319561 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.333593 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.344179 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.365399 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.374911 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.392500 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.396459 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.396496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.396505 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.396523 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.396535 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.412174 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.425076 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.441771 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.451455 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.463100 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.473622 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.483803 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.493479 4593 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:24Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.499108 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.499134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.499141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.499171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.499181 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.601910 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.601943 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.601951 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.601964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.601973 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.655456 4593 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.705227 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.705274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.705285 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.705303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.705317 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.808167 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.808219 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.808233 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.808251 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.808263 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.911088 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.911120 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.911129 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.911143 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:24 crc kubenswrapper[4593]: I0129 10:59:24.911152 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:24Z","lastTransitionTime":"2026-01-29T10:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.013500 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.013604 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.013684 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.013713 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.013734 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.062935 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:26:00.731915272 +0000 UTC Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.087413 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.101048 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.115022 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.116204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.116303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.116371 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.116459 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.116533 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.136026 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b785069130
66867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.150482 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.168096 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.184317 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.196031 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.207244 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.218848 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.219071 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.219137 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.219205 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.219259 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.221695 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.236313 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.246345 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.256802 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.265099 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.304672 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" event={"ID":"1bf08558-eb2b-4c00-8494-6f9691a7e3b6","Type":"ContainerStarted","Data":"49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.309463 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" 
event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.309862 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.309913 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.310078 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.317259 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.325038 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.325359 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.325449 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.325557 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.325700 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.330815 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.340288 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.351101 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.356482 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.356686 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.362440 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.373964 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.388175 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.397541 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.407596 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.415615 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428024 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428202 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428217 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428225 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428238 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.428246 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.444695 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b785069130
66867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.455666 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.468113 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.478606 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.487483 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.496821 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.506164 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.517319 4593 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.528440 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.530362 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.530389 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.530398 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.530412 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.530421 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.540858 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.548971 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.559716 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.568365 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.578749 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.588378 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.602581 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.622916 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1
e821ea70de3089d83bbbb8c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:25Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.633067 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.633252 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.633339 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.633418 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.633532 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.737219 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.737256 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.737267 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.737284 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.737295 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.839402 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.839440 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.839449 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.839463 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.839472 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.942430 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.942709 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.942823 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.942911 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:25 crc kubenswrapper[4593]: I0129 10:59:25.943001 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:25Z","lastTransitionTime":"2026-01-29T10:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.045746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.045785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.045795 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.045810 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.045820 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.063957 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:22:43.331688545 +0000 UTC Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.074310 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.074352 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.074456 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:26 crc kubenswrapper[4593]: E0129 10:59:26.074450 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:26 crc kubenswrapper[4593]: E0129 10:59:26.074572 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:26 crc kubenswrapper[4593]: E0129 10:59:26.074742 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.147523 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.147582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.147594 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.147609 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.147620 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.249322 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.249352 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.249363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.249385 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.249397 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.351960 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.351991 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.352001 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.352018 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.352030 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.454424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.454456 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.454465 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.454479 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.454489 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.556667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.556701 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.556712 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.556730 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.556740 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.659210 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.659255 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.659268 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.659285 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.659296 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.762216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.762253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.762263 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.762279 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.762288 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.864453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.864500 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.864512 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.864529 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.864560 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.966819 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.966886 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.966897 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.966911 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:26 crc kubenswrapper[4593]: I0129 10:59:26.966944 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:26Z","lastTransitionTime":"2026-01-29T10:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.065141 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:39:36.181342595 +0000 UTC Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.068961 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.069045 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.069078 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.069109 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.069130 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.171086 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.171781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.171797 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.171834 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.171845 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.274259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.274296 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.274304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.274318 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.274329 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.376613 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.376668 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.376678 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.376693 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.376703 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.478971 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.479003 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.479013 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.479026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.479037 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.581487 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.581533 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.581542 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.581556 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.581565 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.683403 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.683437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.683447 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.683462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.683470 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.786170 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.786216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.786229 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.786245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.786256 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.888812 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.888851 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.888862 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.888880 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.888892 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.990926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.990956 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.990967 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.990979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:27 crc kubenswrapper[4593]: I0129 10:59:27.990988 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:27Z","lastTransitionTime":"2026-01-29T10:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.065478 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:03:59.082677398 +0000 UTC Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.074817 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.074917 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:28 crc kubenswrapper[4593]: E0129 10:59:28.075051 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.075078 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:28 crc kubenswrapper[4593]: E0129 10:59:28.075235 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:28 crc kubenswrapper[4593]: E0129 10:59:28.075302 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.093147 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.093181 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.093190 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.093204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.093212 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.195547 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.195612 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.195622 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.195651 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.195661 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.298114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.298154 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.298167 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.298191 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.298204 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.319659 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/0.log" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.322588 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0" exitCode=1 Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.322666 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.323778 4593 scope.go:117] "RemoveContainer" containerID="da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.340551 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.355344 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.370034 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.380306 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.394060 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.400480 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.400508 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.400518 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.400531 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.400540 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.408217 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.421395 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.430969 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.448157 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.463087 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.476921 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.490918 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.502909 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.502945 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.502959 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.502979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.502990 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.508522 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.525506 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:28Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.606141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.606183 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.606192 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.606215 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.606225 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.708665 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.708698 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.708710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.708725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.708734 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.810822 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.810853 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.810862 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.810874 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.810884 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.913276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.913317 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.913327 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.913342 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:28 crc kubenswrapper[4593]: I0129 10:59:28.913352 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:28Z","lastTransitionTime":"2026-01-29T10:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.016407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.016454 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.016462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.016476 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.016485 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.026844 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424"] Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.027274 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.029252 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.029453 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.041011 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.054169 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.065982 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:13:39.311043599 +0000 UTC Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.067039 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.086796 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.102983 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1
e821ea70de3089d83bbbb8c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.117223 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.118785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.118833 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.118844 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.118862 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.118874 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.131078 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.144202 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.144266 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.144301 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/47b33c04-1415-41d1-9264-1c4b9de87fff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.144278 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" 
Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.144360 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fhqm\" (UniqueName: \"kubernetes.io/projected/47b33c04-1415-41d1-9264-1c4b9de87fff-kube-api-access-8fhqm\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.156856 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.169435 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.181664 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.190668 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.204972 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.219148 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.220452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.220486 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.220494 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.220507 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.220528 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.230596 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.244914 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fhqm\" (UniqueName: \"kubernetes.io/projected/47b33c04-1415-41d1-9264-1c4b9de87fff-kube-api-access-8fhqm\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.244975 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" 
Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.245090 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.245127 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47b33c04-1415-41d1-9264-1c4b9de87fff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.245789 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.246345 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47b33c04-1415-41d1-9264-1c4b9de87fff-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.255018 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47b33c04-1415-41d1-9264-1c4b9de87fff-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.274173 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fhqm\" (UniqueName: \"kubernetes.io/projected/47b33c04-1415-41d1-9264-1c4b9de87fff-kube-api-access-8fhqm\") pod \"ovnkube-control-plane-749d76644c-qb424\" (UID: \"47b33c04-1415-41d1-9264-1c4b9de87fff\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.328318 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.328375 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.328422 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.328443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.328459 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.330446 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/1.log" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.331163 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/0.log" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.334033 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb" exitCode=1 Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.334063 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.334092 4593 scope.go:117] "RemoveContainer" containerID="da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.335892 4593 scope.go:117] "RemoveContainer" containerID="bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb" Jan 29 10:59:29 crc kubenswrapper[4593]: E0129 10:59:29.336104 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.340180 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.352393 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.368188 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.389676 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 
stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.405988 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.419689 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.431933 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.432489 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.432521 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.432530 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.432545 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.432554 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.442251 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.453204 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.465869 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.477288 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.486432 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.497452 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.509242 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.522213 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.533486 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:29Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.534711 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.534757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.534768 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.534785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.534794 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.637529 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.637557 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.637567 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.637579 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.637650 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.739897 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.739935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.739953 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.739971 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.739981 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.847647 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.847687 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.847697 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.847711 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.847721 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.949776 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.949827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.949839 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.949857 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:29 crc kubenswrapper[4593]: I0129 10:59:29.949868 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:29Z","lastTransitionTime":"2026-01-29T10:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.052169 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.052242 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.052253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.052265 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.052274 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.067001 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:43:38.436055728 +0000 UTC Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.074318 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.074347 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.074372 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.074478 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.074526 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.074576 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.145717 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7jm9m"] Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.146221 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.146292 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.154603 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.154667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.154679 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.154694 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.154704 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.166355 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.182291 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod 
openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.196904 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.208091 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.219475 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.231169 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.241411 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.252410 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.254032 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.254080 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t27pv\" (UniqueName: \"kubernetes.io/projected/7d229804-724c-4e21-89ac-e3369b615389-kube-api-access-t27pv\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.257189 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.257225 4593 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.257235 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.257249 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.257258 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.268933 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.281855 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.292558 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.303213 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.312648 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.323565 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.335971 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.338486 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" event={"ID":"47b33c04-1415-41d1-9264-1c4b9de87fff","Type":"ContainerStarted","Data":"573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.338524 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" event={"ID":"47b33c04-1415-41d1-9264-1c4b9de87fff","Type":"ContainerStarted","Data":"75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.338536 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" event={"ID":"47b33c04-1415-41d1-9264-1c4b9de87fff","Type":"ContainerStarted","Data":"fd8b7bfa9bdbb54b1d66f2071c1fd2e0fa14dee6b604c8f41f797dca0c4a3987"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.340307 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/1.log" Jan 29 10:59:30 crc 
kubenswrapper[4593]: I0129 10:59:30.352472 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.354797 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: 
\"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.354834 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t27pv\" (UniqueName: \"kubernetes.io/projected/7d229804-724c-4e21-89ac-e3369b615389-kube-api-access-t27pv\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.354901 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.354954 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:30.854940432 +0000 UTC m=+36.727974613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.358794 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.358819 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.358826 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.358838 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.358846 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.366203 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.371035 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t27pv\" (UniqueName: \"kubernetes.io/projected/7d229804-724c-4e21-89ac-e3369b615389-kube-api-access-t27pv\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.381383 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.394579 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.407755 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.419671 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.430761 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.440522 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.449724 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.460619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.460670 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.460682 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.460700 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.460711 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.461855 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.475710 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.492267 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 
stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.502864 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.515749 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.527395 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.538911 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.550447 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:30Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.563085 4593 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.563142 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.563154 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.563170 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.563182 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.666212 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.666251 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.666262 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.666277 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.666288 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.769392 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.769448 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.769467 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.769492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.769512 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.860228 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m"
Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.860513 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 10:59:30 crc kubenswrapper[4593]: E0129 10:59:30.860606 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:31.860585969 +0000 UTC m=+37.733620170 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.872293 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.872370 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.872390 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.872413 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.872427 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.975722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.975774 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.975788 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.975808 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:30 crc kubenswrapper[4593]: I0129 10:59:30.975822 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:30Z","lastTransitionTime":"2026-01-29T10:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.067867 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:24:27.655288231 +0000 UTC
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.078839 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.078887 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.078902 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.078921 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.078937 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
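
Every "Node became not ready" record above carries the same root cause: the kubelet finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so it holds the node's Ready condition at False with reason KubeletNotReady. In this cluster the CNI config is normally written by ovn-kubernetes, whose ovnkube-node pod is failing to start earlier in the log, so the directory stays empty. A minimal Go sketch of the check an operator might run to confirm the condition (the directory path is taken from the log message; the extension filter mirrors what CNI's libcni loads, which is an assumption here):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// Path quoted in the kubelet message above; adjust if the kubelet
// runs with a different --cni-conf-dir.
const cniConfDir = "/etc/kubernetes/cni/net.d"

func main() {
	entries, err := os.ReadDir(cniConfDir)
	if err != nil {
		log.Fatalf("cannot read %s: %v", cniConfDir, err)
	}
	// libcni loads .conf, .conflist and .json files; anything else
	// (or an empty directory) leaves NetworkReady=false, as logged.
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		fmt.Println("no CNI configuration files found; NetworkReady will stay false")
		return
	}
	fmt.Println("CNI configurations present:", found)
}
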
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.183324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.183374 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.183387 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.183411 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.183424 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.287726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.287785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.287803 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.287827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.287849 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.390807 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.390845 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.390856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.390872 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.390883 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.493521 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.493563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.493572 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.493587 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.493595 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.595964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.596026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.596038 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.596056 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.596070 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.699026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.699091 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.699114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.699145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.699168 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
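
The pod status patches earlier in the log are all rejected for the same reason: the kubelet's PATCH is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-29. A short Go sketch that reads the presented certificate's validity window directly, independent of verification (a hypothetical diagnostic, not part of the cluster tooling):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Endpoint taken from the webhook errors above. InsecureSkipVerify is
	// deliberate: the point is to inspect the certificate that would
	// otherwise fail verification with "certificate has expired".
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
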
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.769379 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.769509 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769618 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769624 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 10:59:47.769570573 +0000 UTC m=+53.642604814 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769713 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:47.769692426 +0000 UTC m=+53.642726627 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.769738 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.769770 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769886 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769901 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769913 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769929 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.769948 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:47.769934152 +0000 UTC m=+53.642968363 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.770015 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:47.769991404 +0000 UTC m=+53.643025645 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.802730 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.803047 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.803063 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.803104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.803117 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.870686 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.870746 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m"
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.870897 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.870960 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:33.870942349 +0000 UTC m=+39.743976550 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered
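
Note the durationBeforeRetry progression for the metrics-certs volume: 1s at 10:59:30.860606, 2s here, while volumes that have been failing longer already wait 16s. That shape is consistent with the kubelet's exponential backoff for failed volume operations. An illustrative Go sketch of the doubling pattern (the 1s floor and 16s cap are assumptions read off this log, not constants taken from kubelet source):

package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the wait after each failure, starting at initial
// and saturating at max, mirroring the 1s, 2s, ... 16s gaps in the log.
func nextDelay(d, initial, max time.Duration) time.Duration {
	if d == 0 {
		return initial
	}
	if d >= max/2 {
		return max
	}
	return d * 2
}

func main() {
	var d time.Duration
	for i := 1; i <= 6; i++ {
		d = nextDelay(d, time.Second, 16*time.Second)
		fmt.Printf("retry %d: wait %s\n", i, d)
	}
}
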
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.870993 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.871037 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.871056 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 10:59:31 crc kubenswrapper[4593]: E0129 10:59:31.871141 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:47.871117754 +0000 UTC m=+53.744151985 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.906030 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.906121 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.906134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.906150 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:31 crc kubenswrapper[4593]: I0129 10:59:31.906161 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:31Z","lastTransitionTime":"2026-01-29T10:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.009157 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.009459 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.009600 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.009776 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.009889 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.017799 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.017935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.018004 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.018067 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.018127 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.028806 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.032748 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.032777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.032785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.032798 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.032808 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.044480 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.048477 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.048526 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.048542 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.048569 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.048588 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.063153 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.067713 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.067765 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.067777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.067798 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.067810 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.068444 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:07:00.553873919 +0000 UTC Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.074060 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.074107 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.074065 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.074247 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.074266 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.074765 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.074796 4593 scope.go:117] "RemoveContainer" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.074861 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.075018 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.088856 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.101798 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.101845 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
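[Editorial aside, not part of the captured log: every failed patch above carries the same four node conditions (MemoryPressure, DiskPressure, PIDPressure, Ready). A minimal Go sketch of the condition shape, with field names taken from the JSON visible in the entries; the strings for the timestamps are a simplifying stand-in, since the real kubelet uses the Kubernetes API types.]

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal stand-in for the condition objects logged by setters.go above.
// The actual kubelet uses the Kubernetes API types (e.g. metav1.Time).
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition copied verbatim from the "Node became not ready" entries.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		fmt.Println("unmarshal failed:", err)
		return
	}
	fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason) // Ready=False (KubeletNotReady)
}
```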
event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.101858 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.101877 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.101891 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.123701 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: E0129 10:59:32.123937 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.127233 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
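[Editorial aside, not part of the captured log: every retry fails for the same reason. The network-node-identity webhook at 127.0.0.1:9743 presents a certificate that expired on 2025-08-24, so the API server cannot admit the node-status patch and the kubelet finally gives up with "update node status exceeds retry count". A minimal diagnostic sketch in Go, assuming it runs on the node where that loopback endpoint is reachable:]

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Webhook endpoint taken from the Post URL in the log entries above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification on purpose: we want to read the certificate
		// even though it is expired and would otherwise fail the handshake.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore)
	fmt.Printf("notAfter:  %s\n", cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 error in the log")
	}
}
```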
event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.127304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.127317 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.127341 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.127356 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.229443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.229496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.229505 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.229519 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.229530 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.332241 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.332269 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.332276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.332289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.332297 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.351318 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.353332 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.353699 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.370598 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.386019 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.400133 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.416736 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.428825 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.434787 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.434824 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.434836 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.434855 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.434871 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.448674 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.463720 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.474722 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.488731 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.501849 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.519306 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.536847 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.536884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.536894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.536909 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.536922 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.538734 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.555079 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc 
kubenswrapper[4593]: I0129 10:59:32.573083 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.589605 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e543
19f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.610856 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51c
f5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 
services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:32Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.639121 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.639339 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.639428 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.639546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.639623 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.742491 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.742546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.742556 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.742569 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.742580 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.845085 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.845126 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.845134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.845148 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.845156 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.947083 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.947116 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.947124 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.947137 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:32 crc kubenswrapper[4593]: I0129 10:59:32.947145 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:32Z","lastTransitionTime":"2026-01-29T10:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.049958 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.050026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.050059 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.050085 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.050102 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.069305 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:49:32.656366301 +0000 UTC Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.153009 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.153079 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.153091 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.153113 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.153125 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.256247 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.256301 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.256312 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.256330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.256341 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.358903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.358943 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.358954 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.358973 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.358985 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.461131 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.461169 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.461179 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.461197 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.461210 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.563218 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.563255 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.563266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.563282 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.563293 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.665663 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.665715 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.665729 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.665746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.665756 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.768183 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.768258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.768283 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.768348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.768374 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.871314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.871391 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.871418 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.871451 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.871492 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.890836 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:33 crc kubenswrapper[4593]: E0129 10:59:33.891013 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:33 crc kubenswrapper[4593]: E0129 10:59:33.891060 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:37.891046633 +0000 UTC m=+43.764080824 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.975227 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.975281 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.975296 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.975316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:33 crc kubenswrapper[4593]: I0129 10:59:33.975329 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:33Z","lastTransitionTime":"2026-01-29T10:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.070158 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:49:14.772969829 +0000 UTC Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.074553 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.074571 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.074627 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.074678 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:34 crc kubenswrapper[4593]: E0129 10:59:34.074773 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:34 crc kubenswrapper[4593]: E0129 10:59:34.074959 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:34 crc kubenswrapper[4593]: E0129 10:59:34.075034 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:34 crc kubenswrapper[4593]: E0129 10:59:34.075111 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.078458 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.078505 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.078522 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.078544 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.078561 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.181081 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.181165 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.181183 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.181210 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.181230 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.283424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.283477 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.283489 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.283509 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.283521 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.386809 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.386850 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.386861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.386878 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.386889 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.489985 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.490020 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.490028 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.490042 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.490050 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.592294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.592330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.592340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.592353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.592362 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.695311 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.695358 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.695370 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.695386 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.695398 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.797701 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.797750 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.797762 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.797777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.797790 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.899549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.899588 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.899600 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.899614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:34 crc kubenswrapper[4593]: I0129 10:59:34.899624 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:34Z","lastTransitionTime":"2026-01-29T10:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.002846 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.002886 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.002896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.002910 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.002921 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.070719 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 00:18:20.710425674 +0000 UTC Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.087757 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}
\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.102104 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.104574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.104615 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.104660 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.104688 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.104705 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.117040 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.129348 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.144038 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.156208 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.174413 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.188469 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208225 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208679 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208692 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208708 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.208719 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.222087 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.233057 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.242910 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.253754 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.266572 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.280154 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.300835 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51c
f5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da8e34eb56377e17ccba577d6fb9126cfb4d73d1e821ea70de3089d83bbbb8c0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:28Z\\\",\\\"message\\\":\\\"opping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.043823 5788 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 10:59:28.044108 5788 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044208 5788 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 10:59:28.044719 5788 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 10:59:28.044740 5788 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 10:59:28.044764 5788 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0129 10:59:28.044773 5788 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 10:59:28.044795 5788 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 10:59:28.044801 5788 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0129 10:59:28.044809 5788 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0129 10:59:28.044824 5788 factory.go:656] Stopping watch factory\\\\nI0129 10:59:28.044835 5788 ovnkube.go:599] Stopped ovnkube\\\\nI0129 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 
services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:35Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.311591 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.311621 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.311648 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.311665 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.311676 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.414379 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.414425 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.414436 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.414453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.414466 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.517303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.517412 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.517424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.517443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.517453 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.619903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.619960 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.619969 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.619987 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.619996 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.722961 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.723005 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.723013 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.723027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.723036 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.826492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.826545 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.826553 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.826567 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.826576 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.930100 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.930141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.930156 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.930172 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:35 crc kubenswrapper[4593]: I0129 10:59:35.930181 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:35Z","lastTransitionTime":"2026-01-29T10:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.033675 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.033710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.033718 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.033734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.033744 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.071777 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:38:04.947927552 +0000 UTC Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.074108 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.074130 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:36 crc kubenswrapper[4593]: E0129 10:59:36.074294 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.074770 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:36 crc kubenswrapper[4593]: E0129 10:59:36.074875 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.074931 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:36 crc kubenswrapper[4593]: E0129 10:59:36.075007 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:36 crc kubenswrapper[4593]: E0129 10:59:36.075131 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.137037 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.137083 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.137094 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.137113 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.137127 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.239877 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.239974 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.239990 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.240016 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.240042 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.343129 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.343175 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.343185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.343203 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.343215 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.445289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.445348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.445357 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.445380 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.445392 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.548568 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.548619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.548654 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.548678 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.548691 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.651366 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.651427 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.651444 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.651467 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.651484 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.753390 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.753421 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.753431 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.753445 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.753454 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.856490 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.856552 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.856573 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.856600 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.856671 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.959087 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.959137 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.959149 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.959167 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:36 crc kubenswrapper[4593]: I0129 10:59:36.959181 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:36Z","lastTransitionTime":"2026-01-29T10:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.061045 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.061083 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.061093 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.061109 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.061119 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.072684 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:43:23.583581723 +0000 UTC Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.164316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.164345 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.164355 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.164370 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.164381 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.266831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.266875 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.266887 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.266903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.266916 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.369805 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.369850 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.369861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.369875 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.369887 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.471676 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.471711 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.471721 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.471733 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.471742 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.574527 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.574559 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.574569 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.574583 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.574592 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.677094 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.677123 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.677132 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.677148 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.677157 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.779378 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.779436 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.779456 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.779480 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.779495 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.882276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.882311 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.882320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.882333 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.882341 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.931154 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:37 crc kubenswrapper[4593]: E0129 10:59:37.931404 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:37 crc kubenswrapper[4593]: E0129 10:59:37.931488 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 10:59:45.931466846 +0000 UTC m=+51.804501047 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.984963 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.984999 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.985008 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.985021 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:37 crc kubenswrapper[4593]: I0129 10:59:37.985030 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:37Z","lastTransitionTime":"2026-01-29T10:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.073756 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:02:17.621483792 +0000 UTC Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.074385 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.074495 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:38 crc kubenswrapper[4593]: E0129 10:59:38.074587 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:38 crc kubenswrapper[4593]: E0129 10:59:38.074719 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.074461 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:38 crc kubenswrapper[4593]: E0129 10:59:38.074884 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.074784 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:38 crc kubenswrapper[4593]: E0129 10:59:38.075144 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.087719 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.087760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.087769 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.087783 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.087793 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.190314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.190353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.190365 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.190382 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.190393 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.292988 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.293032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.293046 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.293065 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.293084 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.395939 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.396000 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.396015 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.396034 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.396050 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.498490 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.498530 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.498541 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.498555 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.498567 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.601128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.601188 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.601214 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.601253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.601274 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.705021 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.705058 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.705066 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.705078 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.705089 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.808516 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.808582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.808695 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.808727 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.808748 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.912065 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.912115 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.912134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.912157 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:38 crc kubenswrapper[4593]: I0129 10:59:38.912175 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:38Z","lastTransitionTime":"2026-01-29T10:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.015727 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.015788 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.015804 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.015829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.015846 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.074267 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 18:46:12.783600905 +0000 UTC Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.117991 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.118079 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.118100 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.118129 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.118150 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.221124 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.221230 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.221241 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.221254 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.221264 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.324049 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.324100 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.324113 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.324131 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.324142 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.427118 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.427163 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.427175 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.427196 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.427208 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.530450 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.530753 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.530837 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.530934 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.531055 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.633496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.633548 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.633560 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.633575 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.633586 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.737062 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.737516 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.737739 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.737884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.738020 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.840827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.840868 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.840881 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.840897 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.840909 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.943457 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.943728 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.943813 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.943911 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:39 crc kubenswrapper[4593]: I0129 10:59:39.943973 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:39Z","lastTransitionTime":"2026-01-29T10:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.046363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.046400 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.046409 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.046423 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.046435 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.074329 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.074387 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:40 crc kubenswrapper[4593]: E0129 10:59:40.074463 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.074481 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:40 crc kubenswrapper[4593]: E0129 10:59:40.074657 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.074340 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.074812 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:03:01.973864181 +0000 UTC Jan 29 10:59:40 crc kubenswrapper[4593]: E0129 10:59:40.074871 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:40 crc kubenswrapper[4593]: E0129 10:59:40.074914 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.148662 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.148720 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.148737 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.148759 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.148774 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.252460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.252521 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.252546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.252577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.252601 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.355506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.355781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.355932 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.356063 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.356175 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.459758 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.459808 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.459822 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.459845 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.459863 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.563202 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.563620 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.563852 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.564012 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.564158 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.667883 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.667946 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.667962 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.667986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.668005 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.770526 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.770564 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.770576 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.770593 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.770605 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.873564 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.873730 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.873755 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.873787 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.873807 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.976874 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.976917 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.976926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.976941 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:40 crc kubenswrapper[4593]: I0129 10:59:40.976952 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:40Z","lastTransitionTime":"2026-01-29T10:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.075162 4593 scope.go:117] "RemoveContainer" containerID="bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.075302 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:20:52.718623952 +0000 UTC Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.079316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.079458 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.079478 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.079493 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.079503 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.088209 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.105026 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.118882 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.130472 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.141930 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.153665 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.169836 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.182935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.182969 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.182983 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.183001 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.183012 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.184983 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.203692 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.218180 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.234912 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.248856 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.259797 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.273599 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.284895 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.284933 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.284944 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.284966 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.284976 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.288767 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.311573 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.386701 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.386982 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.387067 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.387145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.387236 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.387461 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/1.log" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.389219 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.390028 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.408774 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.422459 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.440469 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.456311 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.466138 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.476732 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.485520 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.488896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.488924 4593 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.488933 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.488946 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.488956 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.497475 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.515827 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24
afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.528028 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.541826 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.554720 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.566419 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.578689 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.590965 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.591005 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.591018 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.591037 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.591049 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.596999 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.608062 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:41Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.693145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.693190 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.693198 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.693213 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.693221 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.795115 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.795322 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.795409 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.795493 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.795590 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.897918 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.897953 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.897962 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.897975 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:41 crc kubenswrapper[4593]: I0129 10:59:41.897985 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:41Z","lastTransitionTime":"2026-01-29T10:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.000576 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.000614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.000644 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.000667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.000678 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.074149 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.074211 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.074159 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.074154 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.074303 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.074380 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.074453 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.074583 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.076385 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:50:46.244284219 +0000 UTC Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.103299 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.103344 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.103357 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.103376 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.103392 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.205866 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.205914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.205927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.205946 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.205960 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.308827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.308897 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.308930 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.308964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.308987 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.396370 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/2.log" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.397671 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/1.log" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.401748 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" exitCode=1 Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.401803 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.401840 4593 scope.go:117] "RemoveContainer" containerID="bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.402823 4593 scope.go:117] "RemoveContainer" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.403176 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.411742 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.411893 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.411979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.412071 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.412150 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.420134 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.424760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.424806 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.424815 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.424828 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.424836 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.435667 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.437333 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.440775 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.440857 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.440880 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.440895 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.440904 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.449250 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.460315 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.468763 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.470612 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.470693 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.470721 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.470781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.470798 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.481898 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.484893 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\
\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\
":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.488462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.488495 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.488507 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.488521 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.488530 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.497399 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.499826 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.503081 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.503229 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.503327 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.503417 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.503508 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.508325 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.513983 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: E0129 10:59:42.514140 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.515712 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.515748 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.515760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.515777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.515790 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.519285 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.531347 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.545338 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.561741 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24
afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bba38c77b223090153cccd8bb9a1ef0a2fcce51cf5b84ffd9477af5f022fddcb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"message\\\":\\\"twork_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0129 10:59:29.059862 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-ingress/router-internal-default\\\\\\\"}\\\\nI0129 10:59:29.059871 5934 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0129 10:59:29.059877 5934 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:29.059892 5934 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nI0129 10:59:29.059909 5934 services_controller.go:360] Finished syncing service olm-operator-metrics on namespace openshift-operator-lifecycle-manager for network=default : 1.238243ms\\\\nI0129 10:59:29.059919 5934 services_controller.go:356] Processing sync for service openshift-service-ca-operator/metrics for network=default\\\\nF0129 10:59:29.059935 5934 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e 
Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.574436 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.586574 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.598721 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.607939 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617311 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:42Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617920 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617954 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617965 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617984 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.617996 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.719825 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.719883 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.719900 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.719923 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.719940 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.823330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.823617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.823755 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.823899 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.823991 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.927073 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.927372 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.927511 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.927609 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:42 crc kubenswrapper[4593]: I0129 10:59:42.927753 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:42Z","lastTransitionTime":"2026-01-29T10:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.030212 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.030272 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.030285 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.030303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.030316 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.076621 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:22:33.883238222 +0000 UTC Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.132264 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.132309 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.132321 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.132340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.132352 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.233986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.234237 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.234348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.234456 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.234552 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.337172 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.337217 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.337228 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.337245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.337256 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.406746 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/2.log" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.410929 4593 scope.go:117] "RemoveContainer" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" Jan 29 10:59:43 crc kubenswrapper[4593]: E0129 10:59:43.411352 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.422651 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.439364 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.440313 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.440351 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.440360 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.440374 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.440384 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.454492 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.467232 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.484227 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.496332 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.508089 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.518658 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.529508 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.542976 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.543028 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.543039 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.543054 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.543066 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.544252 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.564973 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.577413 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.590895 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.606814 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.616770 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.627100 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:43Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.644624 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.644665 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.644683 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.644701 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.644712 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.747625 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.747687 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.747697 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.747716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.747731 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.850432 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.850754 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.850854 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.850942 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.851026 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.953460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.953500 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.953513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.953529 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:43 crc kubenswrapper[4593]: I0129 10:59:43.953540 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:43Z","lastTransitionTime":"2026-01-29T10:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.056114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.056143 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.056150 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.056164 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.056172 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.074868 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.074912 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.074943 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.074875 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 10:59:44 crc kubenswrapper[4593]: E0129 10:59:44.075014 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
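
The NodeNotReady repetition above has a single trigger: kubelet's runtime network probe finds no CNI configuration in the directory named in the message. A small on-node sketch of an equivalent check (Python; the directory comes from the log itself, while the extension list is an assumption about what the probe accepts):

    import glob
    import os

    # Directory kubelet is polling, per the "no CNI configuration file" message.
    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

    def cni_confs(conf_dir: str = CNI_CONF_DIR) -> list[str]:
        """List candidate CNI config files (assumed extensions: .conf,
        .conflist, .json). The real probe also parses the files, which
        this sketch skips."""
        found: list[str] = []
        for pattern in ("*.conf", "*.conflist", "*.json"):
            found.extend(glob.glob(os.path.join(conf_dir, pattern)))
        return sorted(found)

    confs = cni_confs()
    print("CNI configs:", confs if confs else "none found -- node stays NotReady")

Until ovnkube-node writes its config there, the Ready condition stays False and the "No sandbox for pod can be found" retries above cannot make progress.
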
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389"
Jan 29 10:59:44 crc kubenswrapper[4593]: E0129 10:59:44.075071 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 10:59:44 crc kubenswrapper[4593]: E0129 10:59:44.075116 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 10:59:44 crc kubenswrapper[4593]: E0129 10:59:44.075154 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.076919 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:02:25.713674191 +0000 UTC
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.158997 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.159055 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.159070 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.159090 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.159107 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
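
The certificate_manager line above shows a healthy kubelet-serving certificate (expires 2026-02-24) whose rotation deadline (2025-12-12) is already in the past, so rotation is due immediately; the next certificate_manager entry below, one second later, prints a different deadline (2026-01-09) for the same certificate. That pattern is consistent with client-go's jittered deadline, which on each evaluation picks a uniformly random point 70-90% of the way through the certificate's validity window. A sketch under that assumption (the issue date here is invented for illustration):

    import datetime
    import random

    def rotation_deadline(not_before, not_after, rng=random):
        """Pick a deadline 70-90% of the way through the validity window
        (assumed to mirror client-go's jitteryDuration; re-drawn on every
        evaluation, which is why consecutive log lines disagree)."""
        total = (not_after - not_before).total_seconds()
        return not_before + datetime.timedelta(seconds=total * (0.7 + 0.2 * rng.random()))

    not_after = datetime.datetime(2026, 2, 24, 5, 53, 3)    # expiration from the log
    not_before = not_after - datetime.timedelta(days=365)   # hypothetical issue date
    for _ in range(2):  # two evaluations, two different deadlines
        print(rotation_deadline(not_before, not_after))
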
Has your network provider started?"}
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.261267 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.261332 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.261352 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.261376 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.261399 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.364708 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.364749 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.364761 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.364780 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.364794 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.466860 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.466896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.466906 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.466921 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.466933 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
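
Earlier in this window the ovnkube-controller container sits in CrashLoopBackOff with "back-off 20s restarting failed container" at restartCount 2. That is kubelet's restart back-off, which doubles per restart from a base delay up to a cap; the 10s base and 5m cap below are assumed constants chosen to match the observed 20s:

    def crashloop_backoff(restart_count: int, base: int = 10, cap: int = 300) -> int:
        """Assumed kubelet schedule: the delay doubles per restart and is
        capped at 5 minutes. restart_count=2 -> 20s, matching the
        CrashLoopBackOff message above."""
        if restart_count < 1:
            return 0
        return min(base * 2 ** (restart_count - 1), cap)

    for n in range(1, 8):
        print(f"restartCount={n}: back-off {crashloop_backoff(n)}s")

So as long as ovnkube-controller keeps exiting with code 1, its restart delay climbs toward the cap, and the node's CNI configuration never appears.
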
Has your network provider started?"}
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.569080 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.569118 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.569128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.569142 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.569153 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.671156 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.671196 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.671206 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.671222 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.671234 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.774435 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.774725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.774830 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.774925 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.774998 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.878153 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.878388 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.878470 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.878581 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.878703 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.980882 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.980959 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.980983 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.981017 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:44 crc kubenswrapper[4593]: I0129 10:59:44.981041 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:44Z","lastTransitionTime":"2026-01-29T10:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
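
Each failed update in this log embeds the strategic-merge patch the status manager was trying to send. The $setElementOrder/conditions directive pins the ordering of the conditions list, while each entry under conditions is merged into the existing condition that shares its type key, so only changed fields need to appear. A stripped-down sketch of that patch shape (placeholder uid and a single condition, for illustration only):

    import json

    # Illustrative strategic-merge patch mirroring the entries in this log:
    # $setElementOrder pins list order; items merge on their "type" key.
    patch = {
        "metadata": {"uid": "00000000-0000-0000-0000-000000000000"},  # placeholder
        "status": {
            "$setElementOrder/conditions": [
                {"type": "PodReadyToStartContainers"},
                {"type": "Initialized"},
                {"type": "Ready"},
                {"type": "ContainersReady"},
                {"type": "PodScheduled"},
            ],
            "conditions": [
                {"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
            ],
        },
    }
    print(json.dumps(patch, indent=2))

None of these patches reach the API server, because the admission webhook call fails TLS verification first; the patch bodies themselves are well-formed.
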
Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.077047 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:10:54.175524064 +0000 UTC Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.083266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.083301 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.083331 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.083347 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.083358 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.093576 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.115842 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.177134 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.185056 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.185512 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.185810 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.185990 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.186191 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.196823 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.215076 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.233068 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.245877 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.258413 4593 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.273543 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.286358 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.288325 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.288355 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.288366 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.288381 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.288393 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.298187 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.306128 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.313477 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.324315 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.339542 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.350170 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 
10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.360112 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.370121 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.379469 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.390063 4593 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.390252 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.390351 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.390437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.390504 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.392740 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.404210 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.415174 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.427995 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.437458 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.447961 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.456291 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.467664 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.476739 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.490155 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.492705 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.492726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.492733 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.492746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.492755 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.503964 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.516181 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.532624 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.551405 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:45Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.601052 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.601116 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.601128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.601154 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.601166 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.950559 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:45 crc kubenswrapper[4593]: E0129 10:59:45.950757 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:45 crc kubenswrapper[4593]: E0129 10:59:45.950807 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. 
No retries permitted until 2026-01-29 11:00:01.950792344 +0000 UTC m=+67.823826535 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.952454 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.952496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.952506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.952524 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:45 crc kubenswrapper[4593]: I0129 10:59:45.952536 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:45Z","lastTransitionTime":"2026-01-29T10:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.054400 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.054436 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.054446 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.054462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.054471 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.073907 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.074007 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.074015 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:46 crc kubenswrapper[4593]: E0129 10:59:46.074418 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.074091 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:46 crc kubenswrapper[4593]: E0129 10:59:46.074526 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:46 crc kubenswrapper[4593]: E0129 10:59:46.074256 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:46 crc kubenswrapper[4593]: E0129 10:59:46.074612 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.077140 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:29:12.001304829 +0000 UTC Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.156606 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.156717 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.156734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.156751 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.156764 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.259616 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.259694 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.259708 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.259725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.259737 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.362001 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.362042 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.362054 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.362075 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.362089 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.464590 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.464651 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.464663 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.464680 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.464691 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.566925 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.566963 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.566973 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.566988 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.566998 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.670667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.670698 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.670706 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.670720 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.670729 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.772915 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.772965 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.773037 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.773072 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.773089 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.876783 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.876828 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.876840 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.876855 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.876865 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.979427 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.979461 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.979479 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.979496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:46 crc kubenswrapper[4593]: I0129 10:59:46.979506 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:46Z","lastTransitionTime":"2026-01-29T10:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.078243 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:20:37.186400964 +0000 UTC Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.080861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.080988 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.081102 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.081203 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.081261 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.183204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.183258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.183269 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.183288 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.183302 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.285183 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.285232 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.285246 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.285300 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.285312 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.387429 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.387476 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.387492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.387510 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.387524 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.490706 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.490740 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.490749 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.490763 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.490772 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.594192 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.594250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.594266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.594289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.594307 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.696473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.696516 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.696548 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.696565 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.696576 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.799831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.799867 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.799875 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.799889 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.799899 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.867717 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.867912 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:00:19.867870447 +0000 UTC m=+85.740904648 (durationBeforeRetry 32s). 
Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.868020 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.868064 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.868098 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868192 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868234 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:00:19.868222966 +0000 UTC m=+85.741257167 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868451 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868486 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:00:19.868475853 +0000 UTC m=+85.741510054 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
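The "object ... not registered" failures above come from the kubelet's pod-scoped ConfigMap/Secret managers: an object is only served once a registered pod references it, so SetUp is refused while the pod/object bookkeeping is still catching up after the restart. A minimal sketch of that gate, with illustrative names rather than the kubelet's actual types:

package main

import "fmt"

// objectCache sketches a pod-scoped ConfigMap/Secret manager: reads succeed
// only after the object has been registered as referenced by a synced pod.
type objectCache struct {
	registered map[string]bool              // "namespace/name" -> registered
	data       map[string]map[string]string // backing object contents
}

func (c *objectCache) get(namespace, name string) (map[string]string, error) {
	key := namespace + "/" + name
	if !c.registered[key] {
		// The state the log shows: volume setup ran before the pod's
		// namespace objects were (re)registered.
		return nil, fmt.Errorf("object %q/%q not registered", namespace, name)
	}
	return c.data[key], nil
}

func main() {
	c := &objectCache{registered: map[string]bool{}, data: map[string]map[string]string{}}
	if _, err := c.get("openshift-network-console", "networking-console-plugin"); err != nil {
		fmt.Println("MountVolume.SetUp failed:", err)
	}
}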
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868609 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868656 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868670 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.868720 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:00:19.868707219 +0000 UTC m=+85.741741430 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.902873 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.902903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.902914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.902934 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.902945 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:47Z","lastTransitionTime":"2026-01-29T10:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:47 crc kubenswrapper[4593]: I0129 10:59:47.968906 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.969073 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.969094 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.969108 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:47 crc kubenswrapper[4593]: E0129 10:59:47.969165 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:00:19.96915007 +0000 UTC m=+85.842184271 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.005549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.005588 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.005598 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.005614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.005623 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.073836 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:48 crc kubenswrapper[4593]: E0129 10:59:48.074220 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.073927 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:48 crc kubenswrapper[4593]: E0129 10:59:48.074438 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.073880 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:48 crc kubenswrapper[4593]: E0129 10:59:48.074684 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.073944 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:48 crc kubenswrapper[4593]: E0129 10:59:48.074865 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.078573 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:29:48.979832326 +0000 UTC Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.108304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.108353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.108371 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.108392 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.108408 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.211650 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.211682 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.211692 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.211708 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.211719 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.313357 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.313390 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.313402 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.313416 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.313427 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.415986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.416020 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.416033 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.416052 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.416063 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.519557 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.520173 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.520270 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.520340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.520416 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.622558 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.622604 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.622617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.622655 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.622668 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.725204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.725234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.725242 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.725256 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.725265 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.827710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.827741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.827749 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.827762 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.827773 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.930802 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.930841 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.930850 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.930867 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:48 crc kubenswrapper[4593]: I0129 10:59:48.930879 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:48Z","lastTransitionTime":"2026-01-29T10:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.033424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.033460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.033468 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.033480 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.033490 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.079705 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 20:02:41.958062614 +0000 UTC Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.135415 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.135457 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.135468 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.135485 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.135499 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.237804 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.237840 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.237849 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.237863 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.237874 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.340080 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.340163 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.340176 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.340194 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.340206 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.442363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.442755 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.442843 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.442926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.443003 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.544790 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.544825 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.544835 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.544849 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.544860 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.647727 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.647772 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.647784 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.647799 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.647809 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.750012 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.750042 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.750055 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.750072 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.750082 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.852688 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.852725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.852795 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.852816 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.852827 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.959938 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.960017 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.960145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.960173 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:49 crc kubenswrapper[4593]: I0129 10:59:49.960188 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:49Z","lastTransitionTime":"2026-01-29T10:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.062675 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.062802 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.062814 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.062829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.062840 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.073824 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.073829 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:50 crc kubenswrapper[4593]: E0129 10:59:50.073961 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:50 crc kubenswrapper[4593]: E0129 10:59:50.074007 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.074281 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.074345 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:50 crc kubenswrapper[4593]: E0129 10:59:50.074529 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:50 crc kubenswrapper[4593]: E0129 10:59:50.074529 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.079811 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:13:36.407338689 +0000 UTC Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.165102 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.165147 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.165162 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.165182 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.165198 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.268008 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.268070 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.268085 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.268108 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.268124 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.371253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.371294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.371305 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.371324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.371337 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.474040 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.474069 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.474077 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.474104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.474113 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.576201 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.576245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.576253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.576268 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.576277 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.678198 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.678242 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.678256 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.678271 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.678294 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.780846 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.780885 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.780896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.780913 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.780924 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.882903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.883266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.883419 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.883499 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.883569 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.986429 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.986473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.986486 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.986504 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:50 crc kubenswrapper[4593]: I0129 10:59:50.986517 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:50Z","lastTransitionTime":"2026-01-29T10:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.080609 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:16:54.95095111 +0000 UTC Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.088857 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.088890 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.088902 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.088918 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.088928 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.191989 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.192032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.192044 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.192062 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.192074 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.192074 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.249715 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.260589 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.265081 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.277097 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.288197 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.294741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.294781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.294792 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.294808 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.294819 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.301345 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.314150 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.326099 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.339024 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.349747 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.363853 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.373329 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.384248 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.393937 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.397101 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.397126 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.397134 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.397147 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.397157 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.405332 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z"
Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.426512 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24
afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.437284 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.449824 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:51Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.499171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.499213 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc 
kubenswrapper[4593]: I0129 10:59:51.499222 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.499234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.499243 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.601669 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.601716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.601733 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.601755 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.601771 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.703991 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.704027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.704035 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.704049 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.704060 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.806995 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.807032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.807044 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.807066 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.807079 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.909442 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.909476 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.909484 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.909497 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:51 crc kubenswrapper[4593]: I0129 10:59:51.909506 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:51Z","lastTransitionTime":"2026-01-29T10:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.011386 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.011434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.011448 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.011468 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.011482 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.074166 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.074225 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.074262 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.074403 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.074570 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.075085 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.075253 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.075361 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.080875 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:01:26.935938047 +0000 UTC Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.114363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.114445 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.114469 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.114491 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.114501 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.217827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.217906 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.217933 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.217969 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.218007 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.320762 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.320814 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.320829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.320851 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.320866 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.424694 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.424750 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.424761 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.424781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.424794 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.527763 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.527817 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.527833 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.527855 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.527872 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.631184 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.631259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.631276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.631304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.631322 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.734212 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.734248 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.734259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.734274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.734287 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.837302 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.837340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.837348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.837362 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.837372 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.891107 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.891245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.891260 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.891278 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.891290 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.905936 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:52Z is after 
2025-08-24T17:21:41Z"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.910075 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.910113 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.910123 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.910139 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.910151 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.927066 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.927119 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.927132 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.927151 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.927165 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.944469 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.944520 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.944536 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.944558 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.944574 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.962141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.962175 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.962186 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.962202 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.962214 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:52 crc kubenswrapper[4593]: E0129 10:59:52.982258 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.984514 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.984568 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.984587 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.984618 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:52 crc kubenswrapper[4593]: I0129 10:59:52.984717 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:52Z","lastTransitionTime":"2026-01-29T10:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.081242 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 04:07:43.082000953 +0000 UTC
Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.086909 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.086951 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.086960 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.086975 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.086984 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
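Every status-patch attempt above fails identically: the API server cannot call the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node's clock reading of 2026-01-29T10:59:52Z. The rejection is the standard x509 validity-window test. A minimal Go sketch of that check, assuming a PEM-encoded certificate at a hypothetical path (the log does not show where the webhook's certificate actually lives on disk):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path for illustration; substitute the webhook's
	// real serving certificate.
	pemBytes, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	// The same window test that yields "certificate has expired
	// or is not yet valid" in the log above.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Run against the webhook's actual serving certificate, this would print the same "current time ... is after ..." comparison that terminates each patch attempt above.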
Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.190171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.190204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.190214 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.190230 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.190242 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.292781 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.292811 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.292820 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.292833 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.292843 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.396236 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.396281 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.396293 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.396308 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.396319 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.498657 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.498699 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.498712 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.498725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.498734 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.600788 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.600837 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.600849 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.600864 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.600875 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.703831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.703881 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.703894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.703911 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.703921 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.806716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.806750 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.806758 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.806771 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.806780 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.908936 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.909027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.909043 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.909068 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:53 crc kubenswrapper[4593]: I0129 10:59:53.909081 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:53Z","lastTransitionTime":"2026-01-29T10:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.012250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.012313 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.012324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.012347 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.012361 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.074139 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:54 crc kubenswrapper[4593]: E0129 10:59:54.074279 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.074592 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:54 crc kubenswrapper[4593]: E0129 10:59:54.074670 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.074709 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:54 crc kubenswrapper[4593]: E0129 10:59:54.074746 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.075296 4593 scope.go:117] "RemoveContainer" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" Jan 29 10:59:54 crc kubenswrapper[4593]: E0129 10:59:54.075418 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.075464 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:54 crc kubenswrapper[4593]: E0129 10:59:54.075522 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.081776 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:35:19.863740224 +0000 UTC Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.114175 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.114235 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.114268 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.114286 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.114297 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.217054 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.217107 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.217119 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.217138 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.217151 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.319395 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.319422 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.319431 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.319443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.319452 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.421596 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.421657 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.421674 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.421691 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.421704 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.525076 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.525101 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.525109 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.525121 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.525129 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.627175 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.627249 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.627262 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.627279 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.627321 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.730104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.730144 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.730152 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.730166 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.730175 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.832577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.832625 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.832663 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.832685 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.832702 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.934884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.934927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.934936 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.934959 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:54 crc kubenswrapper[4593]: I0129 10:59:54.934972 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:54Z","lastTransitionTime":"2026-01-29T10:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.037066 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.037110 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.037119 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.037135 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.037143 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.081898 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 18:24:29.760480281 +0000 UTC Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.089593 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.103856 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.119350 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.129275 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.139386 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.139621 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.139770 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.139920 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.140072 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.140187 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.151403 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.162624 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.172817 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.186665 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.197498 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.209221 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.219086 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.231584 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.240427 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.242122 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.242150 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.242162 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc 
kubenswrapper[4593]: I0129 10:59:55.242179 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.242191 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.250722 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.259318 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.267722 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T10:59:55Z is after 2025-08-24T17:21:41Z" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.343868 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.343916 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.343933 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.343956 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.343974 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.446123 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.446198 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.446213 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.446232 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.446245 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.548922 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.548963 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.548975 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.548990 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.549001 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.651861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.652266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.652393 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.652495 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.652677 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.755619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.755908 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.755986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.756058 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.756122 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.859427 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.859480 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.859491 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.859511 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.859526 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.962021 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.962230 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.962324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.962387 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:55 crc kubenswrapper[4593]: I0129 10:59:55.962439 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:55Z","lastTransitionTime":"2026-01-29T10:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.065234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.065316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.065337 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.065365 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.065385 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.074514 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.074547 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:56 crc kubenswrapper[4593]: E0129 10:59:56.074619 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.074661 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.074725 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:56 crc kubenswrapper[4593]: E0129 10:59:56.074720 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:56 crc kubenswrapper[4593]: E0129 10:59:56.074838 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:56 crc kubenswrapper[4593]: E0129 10:59:56.074937 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.082668 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:37:50.21572526 +0000 UTC Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.168147 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.168187 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.168200 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.168217 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.168229 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.270419 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.270461 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.270472 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.270488 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.270499 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.372924 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.372955 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.372965 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.372979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.372989 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.475546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.475584 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.475593 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.475608 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.475619 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.577945 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.577981 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.577993 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.578008 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.578019 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.680613 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.680845 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.680963 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.681063 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.681141 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.783936 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.784208 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.784268 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.784327 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.784395 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.887148 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.887460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.887527 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.887594 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.887668 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.990490 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.990536 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.990547 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.990563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:56 crc kubenswrapper[4593]: I0129 10:59:56.990574 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:56Z","lastTransitionTime":"2026-01-29T10:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.083052 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:57:42.839292066 +0000 UTC Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.092376 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.092434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.092452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.092473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.092488 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.194654 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.194701 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.194716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.194738 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.194756 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.296717 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.297133 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.297213 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.297292 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.297376 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.400276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.400321 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.400332 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.400348 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.400359 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.502820 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.502875 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.502888 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.502905 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.502916 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.605860 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.605903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.605914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.605929 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.605938 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.707692 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.707935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.708001 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.708073 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.708173 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.810293 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.810330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.810341 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.810358 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.810372 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.913125 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.913154 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.913161 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.913174 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:57 crc kubenswrapper[4593]: I0129 10:59:57.913184 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:57Z","lastTransitionTime":"2026-01-29T10:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.015999 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.016033 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.016043 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.016056 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.016065 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.074657 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.074691 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 10:59:58 crc kubenswrapper[4593]: E0129 10:59:58.074780 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 10:59:58 crc kubenswrapper[4593]: E0129 10:59:58.074936 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.074965 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 10:59:58 crc kubenswrapper[4593]: E0129 10:59:58.075009 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.074948 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 10:59:58 crc kubenswrapper[4593]: E0129 10:59:58.075064 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.084221 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:02:10.869909893 +0000 UTC Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.117977 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.118023 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.118034 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.118050 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.118061 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.220535 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.220566 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.220576 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.220595 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.220613 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.322590 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.322642 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.322653 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.322676 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.322687 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.424948 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.424990 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.425001 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.425017 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.425029 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.527409 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.527492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.527504 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.527521 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.527533 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.630059 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.630546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.630699 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.630785 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.630994 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.732826 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.732856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.732863 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.732876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.732884 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.834814 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.834848 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.834858 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.834875 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.834886 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.937240 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.937275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.937282 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.937296 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:58 crc kubenswrapper[4593]: I0129 10:59:58.937307 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:58Z","lastTransitionTime":"2026-01-29T10:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.039128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.039170 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.039184 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.039201 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.039214 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.084825 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:50:42.217005253 +0000 UTC Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.141294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.141339 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.141350 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.141366 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.141378 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.243886 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.243929 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.243940 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.243954 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.243964 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.346266 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.346320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.346334 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.346353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.346363 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.448343 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.448647 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.448734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.448826 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.448899 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.551344 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.551384 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.551394 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.551409 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.551421 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.654084 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.654125 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.654133 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.654148 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.654158 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.756497 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.756536 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.756546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.756559 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.756569 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.859440 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.859473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.859483 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.859515 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.859527 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.962196 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.962249 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.962271 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.962299 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 10:59:59 crc kubenswrapper[4593]: I0129 10:59:59.962319 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T10:59:59Z","lastTransitionTime":"2026-01-29T10:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.064284 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.064324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.064334 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.064350 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.064360 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.074009 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:00 crc kubenswrapper[4593]: E0129 11:00:00.074146 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.074404 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:00 crc kubenswrapper[4593]: E0129 11:00:00.074480 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.074651 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:00 crc kubenswrapper[4593]: E0129 11:00:00.074733 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.074817 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:00 crc kubenswrapper[4593]: E0129 11:00:00.074935 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.085758 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:10:21.633505029 +0000 UTC Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.166519 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.166563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.166574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.166590 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.166601 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.270030 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.270075 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.270092 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.270110 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.270120 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.372375 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.372404 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.372413 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.372425 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.372433 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.474618 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.474882 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.474966 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.475128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.475223 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.577926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.577978 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.577990 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.578006 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.578015 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.680456 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.680522 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.680534 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.680551 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.680573 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.783177 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.783216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.783227 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.783274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.783285 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.885372 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.885407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.885416 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.885431 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.885439 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.987771 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.987821 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.987831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.987859 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:00 crc kubenswrapper[4593]: I0129 11:00:00.987869 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:00Z","lastTransitionTime":"2026-01-29T11:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.085900 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:14:24.981392472 +0000 UTC Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.089567 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.089603 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.089613 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.089647 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.089662 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.192444 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.192489 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.192498 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.192513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.192524 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.294944 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.294998 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.295009 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.295027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.295038 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.397299 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.397344 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.397353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.397370 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.397381 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.499868 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.499914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.499926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.499944 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.499957 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.601604 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.601675 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.601691 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.601709 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.601720 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.704577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.704713 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.704726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.704741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.704751 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.807382 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.807438 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.807450 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.807467 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.807477 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.909684 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.909739 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.909756 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.909779 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.909796 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:01Z","lastTransitionTime":"2026-01-29T11:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:01 crc kubenswrapper[4593]: I0129 11:00:01.991502 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:01 crc kubenswrapper[4593]: E0129 11:00:01.991660 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:00:01 crc kubenswrapper[4593]: E0129 11:00:01.991736 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 11:00:33.991713607 +0000 UTC m=+99.864747868 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.011722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.011763 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.011777 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.011799 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.011812 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.074356 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.074358 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:02 crc kubenswrapper[4593]: E0129 11:00:02.074805 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.074453 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:02 crc kubenswrapper[4593]: E0129 11:00:02.075109 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.074378 4593 util.go:30] "No sandbox for pod can be found. 
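The "(durationBeforeRetry 32s)" in the mount failure above reflects the kubelet's per-volume exponential backoff: each consecutive MountVolume failure doubles the wait before the next attempt, so a 32s delay corresponds to the seventh straight failure under an assumed 500ms initial delay. A sketch of that doubling schedule (the initial delay and the cap are assumptions modelled on the kubelet's behaviour, not values read from this log):

// backoff.go — a sketch of the doubling retry delay behind
// "(durationBeforeRetry 32s)".
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous delay, starting at `initial` and
// saturating at `max`.
func nextDelay(prev, initial, max time.Duration) time.Duration {
	if prev == 0 {
		return initial
	}
	next := 2 * prev
	if next > max {
		return max
	}
	return next
}

func main() {
	var d time.Duration
	for i := 1; i <= 8; i++ {
		d = nextDelay(d, 500*time.Millisecond, 2*time.Minute+2*time.Second)
		fmt.Printf("failure %d: retry in %v\n", i, d)
	}
	// failure 7 prints "retry in 32s", matching the log entry above.
}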
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:02 crc kubenswrapper[4593]: E0129 11:00:02.075282 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:02 crc kubenswrapper[4593]: E0129 11:00:02.074935 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.086663 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 15:33:14.312967784 +0000 UTC Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.114593 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.114655 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.114667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.114686 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.114699 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.216798 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.216831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.216839 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.216854 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.216866 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.319374 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.319413 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.319424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.319438 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.319449 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.421826 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.421865 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.421876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.421890 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.421899 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.524139 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.524186 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.524195 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.524212 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.524221 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.626696 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.626770 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.626782 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.626796 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.626807 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.729010 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.729045 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.729056 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.729071 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.729082 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.831994 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.832237 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.832357 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.832443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.832516 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.935114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.935155 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.935169 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.935185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:02 crc kubenswrapper[4593]: I0129 11:00:02.935196 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:02Z","lastTransitionTime":"2026-01-29T11:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.037392 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.037430 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.037440 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.037454 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.037462 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.087878 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:39:15.322904422 +0000 UTC Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.140020 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.140053 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.140065 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.140081 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.140093 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.242108 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.242149 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.242160 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.242178 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.242190 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
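Note that the certificate_manager.go entries report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every pass (2025-12-07, 2025-11-07, 2025-11-13, now 2026-01-04): the manager re-draws a jittered deadline inside the certificate's validity window each time, and every draw lands before the node's 2026-01-29 clock, so rotation is perpetually due. A sketch of such a jittered deadline, assuming a one-year certificate and a 70–90% window modelled on client-go's behaviour (both are assumptions, not facts read from this log):

// rotation.go — a sketch of a jittered certificate-rotation deadline.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a point 70–90% of the way through the
// certificate's validity window (the fraction is an assumption).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(validity) * fraction))
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed 1y cert
	for i := 0; i < 3; i++ {
		d := rotationDeadline(notBefore, notAfter)
		fmt.Println("rotation deadline:", d)
	}
}

Under those assumptions the draws fall between early November 2025 and mid-January 2026, which is consistent with the deadlines observed in the entries above.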
Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.344705 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.344734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.344744 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.344757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.344766 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.345594 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.345619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.345670 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.345680 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.345689 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.356910 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:03Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.360216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.360271 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
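The status patch above fails for a reason unrelated to the node status itself: the node.network-node-identity.openshift.io webhook's serving certificate expired at 2025-08-24T17:21:41Z, months before the node's 2026-01-29 clock, so TLS verification rejects every call. The failing check is ordinary x509 validity, sketched here against a placeholder PEM file (the file path is hypothetical, introduced only for illustration):

// certexpiry.go — a sketch of the check that fails in the webhook call above:
// TLS rejects a certificate whose NotAfter precedes the current time.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving.pem") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Println("not yet valid")
	default:
		fmt.Println("certificate is within its validity window")
	}
}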
event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.360283 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.360299 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.360310 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.371493 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:03Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.377389 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.377426 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.377437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.377454 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.377465 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.388541 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:03Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.391233 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.391320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.391387 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.391450 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.391503 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.403730 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:03Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.406970 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.407082 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.407162 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.407242 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.407312 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.420027 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:03Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:03 crc kubenswrapper[4593]: E0129 11:00:03.420487 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.447831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.447892 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.447903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.447918 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.447929 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.550557 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.550832 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.550932 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.551027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.551113 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.654446 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.654483 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.654494 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.654512 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.654522 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.756522 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.756555 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.756563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.756577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.756585 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.859199 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.859237 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.859245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.859258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.859268 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.961504 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.961550 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.961561 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.961577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:03 crc kubenswrapper[4593]: I0129 11:00:03.961591 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:03Z","lastTransitionTime":"2026-01-29T11:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.064009 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.064064 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.064076 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.064115 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.064130 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.074486 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.074516 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.074601 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:04 crc kubenswrapper[4593]: E0129 11:00:04.074721 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.074768 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:04 crc kubenswrapper[4593]: E0129 11:00:04.074863 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:04 crc kubenswrapper[4593]: E0129 11:00:04.074947 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:04 crc kubenswrapper[4593]: E0129 11:00:04.074987 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.088174 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 02:39:43.465693402 +0000 UTC Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.166491 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.166741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.166750 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.166762 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.166770 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.269432 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.269467 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.269475 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.269488 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.269499 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.371449 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.371725 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.371819 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.371945 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.372035 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.474344 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.474583 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.474706 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.474818 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.474891 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.577648 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.577684 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.577693 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.577707 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.577717 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.680389 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.680425 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.680436 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.680452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.680463 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.782916 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.782957 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.782967 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.782982 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.782995 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.885330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.885362 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.885371 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.885385 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.885393 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.987710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.987749 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.987757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.987773 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:04 crc kubenswrapper[4593]: I0129 11:00:04.987784 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:04Z","lastTransitionTime":"2026-01-29T11:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.074700 4593 scope.go:117] "RemoveContainer" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.088467 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:32:10.878571974 +0000 UTC Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.088459 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.089672 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.089710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.089726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.089742 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.089753 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.102767 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.113010 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.123256 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.138061 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.150544 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.169249 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.181447 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.191921 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.191951 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.191959 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.191973 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.191981 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.194226 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.204966 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.215481 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.226778 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.239097 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.250177 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.268726 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.292838 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.296626 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.296668 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.296678 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.296693 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.296703 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.314248 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.399188 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.399229 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.399238 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.399253 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.399263 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.479492 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/2.log" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.481817 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.482482 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.483252 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/0.log" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.483302 4593 generic.go:334] "Generic (PLEG): container finished" podID="c76afd0b-36c6-4faa-9278-c08c60c483e9" containerID="c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08" exitCode=1 Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.483340 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerDied","Data":"c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.483757 4593 scope.go:117] "RemoveContainer" containerID="c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.500679 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.501404 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.501439 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.501447 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.501461 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.501470 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.521655 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.548798 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.568192 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.587268 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.603425 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.603484 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.603497 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc 
kubenswrapper[4593]: I0129 11:00:05.603515 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.603526 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.608145 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca
001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.621216 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 
11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.632505 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.643837 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.664619 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb48
4fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.688118 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.702455 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.706218 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.706254 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.706264 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.706280 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.706291 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.718593 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.730454 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.744715 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.756828 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.767958 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.806487 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.808760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.808800 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.808812 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.808837 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.808850 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.835619 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.851649 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.869589 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.883561 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.900492 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.911424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.911463 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.911473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.911489 
4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.911502 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:05Z","lastTransitionTime":"2026-01-29T11:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.922167 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.939332 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.962548 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:05 crc kubenswrapper[4593]: I0129 11:00:05.987327 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:05Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.004847 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.013533 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.013582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.013592 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.013607 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.013618 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.024002 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.041347 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.053450 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.070936 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z"
Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.073921 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:00:06 crc kubenswrapper[4593]: E0129 11:00:06.074092 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.073929 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:00:06 crc kubenswrapper[4593]: E0129 11:00:06.074312 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.073927 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:00:06 crc kubenswrapper[4593]: E0129 11:00:06.074506 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.073979 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m"
Jan 29 11:00:06 crc kubenswrapper[4593]: E0129 11:00:06.074718 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.082841 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.088838 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:52:38.529229672 +0000 UTC Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.091880 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.115421 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.115451 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.115459 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.115474 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.115482 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.219383 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.219413 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.219423 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.219437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.219450 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.321519 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.321556 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.321569 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.321585 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.321597 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.424032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.424069 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.424079 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.424122 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.424133 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.488198 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/3.log" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.488846 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/2.log" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.491193 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" exitCode=1 Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.491232 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.491277 4593 scope.go:117] "RemoveContainer" containerID="b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.492071 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:00:06 crc kubenswrapper[4593]: E0129 11:00:06.492242 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.494189 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/0.log" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.494287 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerStarted","Data":"ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.504654 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.518094 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.526434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.526613 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.526722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.526805 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.526909 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.537602 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"message\\\":\\\"hift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 11:00:06.198924 6472 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 11:00:06.198955 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.549240 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.561947 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.577294 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.587838 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.597237 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.608570 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.622698 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.628759 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.628794 
4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.628805 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.628821 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.628831 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.636076 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.646521 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.659318 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.669109 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.681724 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.692403 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.705102 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.718897 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.731159 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.731194 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.731205 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.731222 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.731233 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.733825 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.750540 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b756f4413823e3d028f193f0a51f1a16e85afb24afa830b3159de2c79de66607\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T10:59:41Z\\\",\\\"message\\\":\\\"rafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0129 10:59:41.796501 6142 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0129 10:59:41.794423 6142 services_controller.go:360] Finished syncing service downloads on namespace openshift-console for network=default : 2.278542ms\\\\nI0129 10:59:41.798050 6142 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 10:59:41.798234 6142 services_controller.go:356] Processing sync for service openshift-console-operator/metrics for network=default\\\\nF0129 10:59:41.798242 6142 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"message\\\":\\\"hift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 11:00:06.198924 6472 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 11:00:06.198955 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.760690 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.774440 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.787129 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.799457 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.810795 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.820985 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.834283 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.834487 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.834549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.834612 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.834684 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.845285 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.871284 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.905656 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.919591 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.933554 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.937032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.937060 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.937070 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.937085 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.937096 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:06Z","lastTransitionTime":"2026-01-29T11:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.944446 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.953284 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 
11:00:06 crc kubenswrapper[4593]: I0129 11:00:06.962347 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:06Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.039563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.039592 4593 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.039601 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.039614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.039622 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.089585 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:07:27.785932388 +0000 UTC
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.141763 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.141984 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.142110 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.142177 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.142233 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.244446 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.244757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.244960 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.245120 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.245290 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.347048 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.347283 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.347369 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.347430 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.347491 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.449856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.450055 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.450158 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.450250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.450332 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.499232 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/3.log"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.502389 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"
Jan 29 11:00:07 crc kubenswrapper[4593]: E0129 11:00:07.502624 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270"
Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.514042 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.525720 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.535002 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.546101 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.552427 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.552465 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.552478 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.552491 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.552500 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.556835 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.572501 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.584716 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.594812 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.604851 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.618659 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.640994 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"message\\\":\\\"hift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 11:00:06.198924 6472 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 11:00:06.198955 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.652702 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.656673 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.656712 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.656726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.656747 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.656762 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.668008 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.682952 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.695951 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.709328 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.720720 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:07Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.758259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.758286 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.758296 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.758309 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.758319 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.860912 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.860947 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.860956 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.860969 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.860979 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.963373 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.963402 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.963410 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.963422 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:07 crc kubenswrapper[4593]: I0129 11:00:07.963431 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:07Z","lastTransitionTime":"2026-01-29T11:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.066424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.066453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.066462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.066475 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.066484 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.074286 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.074318 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.074286 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:08 crc kubenswrapper[4593]: E0129 11:00:08.074373 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.074393 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:08 crc kubenswrapper[4593]: E0129 11:00:08.074465 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:08 crc kubenswrapper[4593]: E0129 11:00:08.074511 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:08 crc kubenswrapper[4593]: E0129 11:00:08.074555 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.089873 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:41:37.847840364 +0000 UTC Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.168757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.168786 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.168794 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.168806 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.168815 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.270662 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.270697 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.270714 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.270734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.270749 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.372986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.373025 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.373038 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.373055 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.373067 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.475080 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.475130 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.475145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.475164 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.475178 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.577667 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.577707 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.577717 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.577734 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.577745 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.680268 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.680304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.680314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.680329 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.680340 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.782975 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.783032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.783044 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.783059 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.783071 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.885090 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.885128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.885140 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.885156 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.885168 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.987327 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.987356 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.987364 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.987408 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:08 crc kubenswrapper[4593]: I0129 11:00:08.987417 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:08Z","lastTransitionTime":"2026-01-29T11:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.089200 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.089248 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.089259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.089275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.089286 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.090292 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:17:34.892876041 +0000 UTC Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.192024 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.192065 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.192076 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.192091 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.192101 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.294422 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.294455 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.294465 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.294481 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.294490 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.396722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.396761 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.396774 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.396804 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.396816 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.499161 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.499195 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.499204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.499216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.499225 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.602224 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.602252 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.602260 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.602275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.602284 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.704726 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.704765 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.704776 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.704792 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.704805 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.807258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.807307 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.807322 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.807347 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.807364 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.909293 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.909328 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.909338 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.909353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:09 crc kubenswrapper[4593]: I0129 11:00:09.909363 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:09Z","lastTransitionTime":"2026-01-29T11:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.013166 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.013223 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.013237 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.013258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.013278 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.074739 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:10 crc kubenswrapper[4593]: E0129 11:00:10.075102 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.074820 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:10 crc kubenswrapper[4593]: E0129 11:00:10.075330 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.074836 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:10 crc kubenswrapper[4593]: E0129 11:00:10.075536 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.074778 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:10 crc kubenswrapper[4593]: E0129 11:00:10.075754 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.090863 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 21:51:00.299880052 +0000 UTC Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.116707 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.117033 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.117191 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.117354 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.117475 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.219624 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.219893 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.219954 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.220011 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.220078 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.322592 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.322692 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.322711 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.322737 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.322754 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.425138 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.425171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.425181 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.425196 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.425206 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.527332 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.527898 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.528238 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.528437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.528594 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.631046 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.631321 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.631414 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.631495 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.631569 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.734342 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.734401 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.734419 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.734446 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.734464 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.837072 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.837100 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.837108 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.837120 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.837130 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.939505 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.939739 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.939943 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.940115 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:10 crc kubenswrapper[4593]: I0129 11:00:10.940269 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:10Z","lastTransitionTime":"2026-01-29T11:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.042707 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.042770 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.042784 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.042802 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.042815 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.091417 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:08:43.859774815 +0000 UTC Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.144688 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.144931 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.145028 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.145117 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.145236 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.247235 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.247305 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.247330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.247359 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.247381 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.349935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.349970 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.349981 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.349999 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.350011 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.451823 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.451877 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.451886 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.451902 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.451913 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.553553 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.553607 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.553622 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.553665 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.553678 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.655986 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.656033 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.656042 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.656058 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.656068 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.758323 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.758359 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.758368 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.758382 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.758394 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.860609 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.860908 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.860970 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.861035 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.861095 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.963181 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.963221 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.963231 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.963247 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:11 crc kubenswrapper[4593]: I0129 11:00:11.963258 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:11Z","lastTransitionTime":"2026-01-29T11:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.065273 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.065305 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.065317 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.065339 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.065352 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.074474 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.074514 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.074487 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:12 crc kubenswrapper[4593]: E0129 11:00:12.074567 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.074484 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:12 crc kubenswrapper[4593]: E0129 11:00:12.074707 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:12 crc kubenswrapper[4593]: E0129 11:00:12.074750 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:12 crc kubenswrapper[4593]: E0129 11:00:12.074860 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.092938 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 06:52:09.168680573 +0000 UTC Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.166911 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.166967 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.166983 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.167005 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.167019 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.269813 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.269864 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.269876 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.269896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.269908 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.372397 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.372432 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.372442 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.372459 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.372472 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.475142 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.475193 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.475202 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.475216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.475226 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.577619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.577896 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.577965 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.578028 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.578110 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.680593 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.680643 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.680662 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.680679 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.680690 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.783368 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.783404 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.783415 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.783430 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.783441 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.885700 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.885732 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.885741 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.885773 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.885782 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.989617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.989724 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.989747 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.989778 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:12 crc kubenswrapper[4593]: I0129 11:00:12.989799 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:12Z","lastTransitionTime":"2026-01-29T11:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.092759 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.092809 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.092822 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.092839 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.092855 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.093015 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 12:12:08.067495387 +0000 UTC Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.195825 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.195889 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.195912 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.195942 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.195964 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.298165 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.298214 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.298228 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.298248 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.298264 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.400271 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.400314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.400328 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.400344 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.400354 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.503767 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.503806 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.503816 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.503832 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.503843 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.521388 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.521434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.521446 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.521461 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.521471 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: E0129 11:00:13.536413 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:13Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.540387 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.540452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.540467 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.540484 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.540499 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: E0129 11:00:13.555997 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:13Z is after 2025-08-24T17:21:41Z"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.559258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.559295 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.559304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.559337 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.559347 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.575332 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.575380 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.575391 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.575407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.575418 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.591670 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.591699 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.591708 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.591722 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.591731 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:13 crc kubenswrapper[4593]: E0129 11:00:13.603601 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.605778 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.605817 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.605829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.605847 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.605858 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.708792 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.708835 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.708848 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.708865 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.708876 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.811519 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.811585 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.811598 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.811614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.811624 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.914107 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.914141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.914149 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.914162 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:13 crc kubenswrapper[4593]: I0129 11:00:13.914171 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:13Z","lastTransitionTime":"2026-01-29T11:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.016429 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.016478 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.016489 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.016506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.016516 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.074921 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.074964 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.074925 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.074921 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:14 crc kubenswrapper[4593]: E0129 11:00:14.075029 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:14 crc kubenswrapper[4593]: E0129 11:00:14.075101 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:14 crc kubenswrapper[4593]: E0129 11:00:14.075181 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:14 crc kubenswrapper[4593]: E0129 11:00:14.075268 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.093143 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:39:59.797656939 +0000 UTC Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.119137 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.119186 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.119203 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.119222 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.119237 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.221731 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.221768 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.221780 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.221796 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.221810 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.323574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.323611 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.323619 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.323650 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.323659 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.425827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.425862 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.425870 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.425886 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.425895 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.528599 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.528650 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.528660 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.528675 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.528685 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.630403 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.630433 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.630441 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.630453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.630461 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.732096 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.732122 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.732131 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.732144 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.732153 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.834947 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.835013 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.835024 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.835041 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.835053 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.937095 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.937124 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.937132 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.937145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:14 crc kubenswrapper[4593]: I0129 11:00:14.937154 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:14Z","lastTransitionTime":"2026-01-29T11:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.039104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.039143 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.039151 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.039165 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.039173 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.088131 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.093898 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:44:28.240705258 +0000 UTC Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.099034 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 
11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.108069 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.120791 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.136529 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0c
b696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.141245 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.141318 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.141331 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.141350 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.141393 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.157858 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"message\\\":\\\"hift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 11:00:06.198924 6472 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 11:00:06.198955 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.168834 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.192400 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.207363 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.218869 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.260270 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.260314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.260324 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.260340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.260350 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.261543 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.275023 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.288214 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.298971 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.309685 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.323137 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.335541 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:15Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.362461 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.362615 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.362654 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.362668 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.362677 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.465442 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.465476 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.465486 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.465500 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.465510 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.567950 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.568000 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.568013 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.568032 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.568045 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.670523 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.670568 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.670579 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.670595 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.670608 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.772979 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.773011 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.773019 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.773031 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.773041 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.875541 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.875584 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.875595 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.875614 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.875625 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.978304 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.978347 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.978364 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.978383 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:15 crc kubenswrapper[4593]: I0129 11:00:15.978395 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:15Z","lastTransitionTime":"2026-01-29T11:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.074128 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.074154 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.074189 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:16 crc kubenswrapper[4593]: E0129 11:00:16.074252 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.074189 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:16 crc kubenswrapper[4593]: E0129 11:00:16.074323 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:16 crc kubenswrapper[4593]: E0129 11:00:16.074436 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:16 crc kubenswrapper[4593]: E0129 11:00:16.074523 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.081132 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.081164 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.081192 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.081208 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.081219 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.094007 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:04:18.842722269 +0000 UTC Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.183967 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.184006 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.184014 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.184031 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.184040 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.287038 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.287091 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.287104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.287126 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.287139 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.390131 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.390173 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.390184 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.390201 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.390214 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.493196 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.493262 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.493277 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.493300 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.493314 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.596506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.596566 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.596580 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.596599 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.596613 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.698970 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.699004 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.699014 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.699029 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.699040 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.801466 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.801506 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.801515 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.801528 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.801537 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.904320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.904363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.904374 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.904390 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:16 crc kubenswrapper[4593]: I0129 11:00:16.904400 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:16Z","lastTransitionTime":"2026-01-29T11:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.007042 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.007072 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.007080 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.007094 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.007104 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.094484 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:30:33.623306195 +0000 UTC Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.109308 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.109354 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.109363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.109376 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.109385 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.212601 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.212682 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.212695 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.212716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.212727 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.315112 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.315160 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.315170 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.315182 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.315191 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.417754 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.417788 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.417799 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.417815 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.417827 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.520647 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.520686 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.520698 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.520714 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.520726 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.622895 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.622929 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.622940 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.622963 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.622986 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.725737 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.725770 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.725780 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.725794 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.725807 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.828447 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.828486 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.828495 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.828511 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.828522 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.930650 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.930688 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.930696 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.930712 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:17 crc kubenswrapper[4593]: I0129 11:00:17.930721 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:17Z","lastTransitionTime":"2026-01-29T11:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.033473 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.033504 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.033513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.033526 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.033536 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.074355 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.074539 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.074570 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:18 crc kubenswrapper[4593]: E0129 11:00:18.074711 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.074766 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:18 crc kubenswrapper[4593]: E0129 11:00:18.074837 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:18 crc kubenswrapper[4593]: E0129 11:00:18.074908 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:18 crc kubenswrapper[4593]: E0129 11:00:18.075009 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.075454 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:00:18 crc kubenswrapper[4593]: E0129 11:00:18.075596 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.094939 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:28:35.189379958 +0000 UTC Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.135964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.136010 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.136022 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.136039 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.136052 4593 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.238289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.238318 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.238327 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.238338 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.238346 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.340699 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.340740 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.340749 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.340764 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.340775 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.442858 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.442892 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.442900 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.442912 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.442923 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.547184 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.547216 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.547226 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.547239 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.547248 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.649765 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.649804 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.649816 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.649832 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.649843 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.757274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.757316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.757328 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.757345 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.757357 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.859275 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.859314 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.859322 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.859336 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.859347 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.961509 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.961856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.961964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.962057 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:18 crc kubenswrapper[4593]: I0129 11:00:18.962151 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:18Z","lastTransitionTime":"2026-01-29T11:00:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.064231 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.064493 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.064653 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.064787 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.064922 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.095127 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:22:44.950046399 +0000 UTC Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.167171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.167441 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.167524 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.167610 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.167769 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.272418 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.272482 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.272517 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.272549 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.272571 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.375280 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.375351 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.375375 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.375404 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.375428 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.478383 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.478443 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.478465 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.478492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.478516 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.580505 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.580540 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.580550 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.580564 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.580576 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.696262 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.696306 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.696316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.696335 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.696348 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.799007 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.799053 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.799065 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.799085 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.799098 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.870107 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870242 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.87022446 +0000 UTC m=+149.743258651 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.870339 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.870389 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.870432 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870471 4593 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870505 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.870498717 +0000 UTC m=+149.743532908 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870539 4593 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870676 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870705 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.870678093 +0000 UTC m=+149.743712334 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870715 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870743 4593 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.870809 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.870787146 +0000 UTC m=+149.743821377 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.902274 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.902337 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.902358 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.902386 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.902407 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:19Z","lastTransitionTime":"2026-01-29T11:00:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:19 crc kubenswrapper[4593]: I0129 11:00:19.971535 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.972003 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.972114 4593 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.972206 4593 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:00:19 crc kubenswrapper[4593]: E0129 11:00:19.972340 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.972323928 +0000 UTC m=+149.845358119 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.005939 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.006015 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.006034 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.006058 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.006077 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.074836 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.074854 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:20 crc kubenswrapper[4593]: E0129 11:00:20.075065 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:20 crc kubenswrapper[4593]: E0129 11:00:20.075176 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.075563 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.075704 4593 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:00:20 crc kubenswrapper[4593]: E0129 11:00:20.076114 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:00:20 crc kubenswrapper[4593]: E0129 11:00:20.075957 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.095567 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:45:24.807700675 +0000 UTC
Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.210743 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.210788 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.210802 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.210822 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.210836 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.313141 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.313178 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.313189 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.313204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.313214 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.416564 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.416853 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.416926 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.417229 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.417326 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.520927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.521299 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.521375 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.521452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.521554 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.624527 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.624592 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.624602 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.624617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.624626 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.727193 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.727241 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.727251 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.727269 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.727280 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.829204 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.829301 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.829315 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.829330 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.829339 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.931107 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.931138 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.931147 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.931161 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:20 crc kubenswrapper[4593]: I0129 11:00:20.931172 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:20Z","lastTransitionTime":"2026-01-29T11:00:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.033074 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.033114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.033125 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.033140 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.033151 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.096352 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:13:48.295170131 +0000 UTC Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.136066 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.136110 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.136125 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.136142 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.136152 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.238308 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.238340 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.238349 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.238363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.238372 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.340676 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.340719 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.340731 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.340746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.340758 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.443453 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.443490 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.443498 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.443514 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.443523 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.545362 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.545398 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.545408 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.545423 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.545434 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.647582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.647627 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.647653 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.647669 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.647680 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.750199 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.750240 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.750250 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.750265 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.750275 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.852861 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.852894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.852904 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.852920 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.852929 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.954772 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.954818 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.954831 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.954846 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:21 crc kubenswrapper[4593]: I0129 11:00:21.954858 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:21Z","lastTransitionTime":"2026-01-29T11:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.057384 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.057415 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.057429 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.057452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.057463 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.073844 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.073882 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.073851 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:22 crc kubenswrapper[4593]: E0129 11:00:22.073948 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:22 crc kubenswrapper[4593]: E0129 11:00:22.074071 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:22 crc kubenswrapper[4593]: E0129 11:00:22.074174 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.074600 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:22 crc kubenswrapper[4593]: E0129 11:00:22.074687 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.097317 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:48:48.042515436 +0000 UTC Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.161053 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.161099 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.161108 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.161124 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.161134 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.263515 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.263556 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.263566 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.263583 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.263594 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.366998 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.367083 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.367098 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.367119 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.367131 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.468862 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.468921 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.468938 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.468961 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.468977 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.570991 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.571018 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.571026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.571039 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.571047 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.673704 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.673758 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.673774 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.673796 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.673814 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.775481 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.775513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.775522 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.775536 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.775546 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.878251 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.878499 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.878590 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.878678 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.878746 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.981496 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.981537 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.981548 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.981566 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:22 crc kubenswrapper[4593]: I0129 11:00:22.981577 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:22Z","lastTransitionTime":"2026-01-29T11:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.082900 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.082937 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.082947 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.082967 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.082979 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.087663 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.097825 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:28:19.67738572 +0000 UTC Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.185582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.185659 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.185673 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.185690 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.185700 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.288058 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.288139 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.288161 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.288185 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.288203 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.390756 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.390797 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.390807 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.390823 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.390832 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.493816 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.493869 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.493888 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.493916 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.493934 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.597022 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.597095 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.597107 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.597123 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.597135 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.699518 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.699555 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.699563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.699577 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.699585 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.801379 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.801424 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.801434 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.801452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.801463 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.903856 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.903904 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.903914 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.903935 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.903946 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.905894 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.905974 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.905985 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.906028 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.906055 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.918499 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:23Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.921870 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.921903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.921912 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.921927 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.921936 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.934806 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:23Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.938259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.938292 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.938302 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.938317 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.938328 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.948543 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:23Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.953276 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.953309 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.953320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.953336 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.953347 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.965315 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:23Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.968574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.968607 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.968617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.968644 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:23 crc kubenswrapper[4593]: I0129 11:00:23.968653 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:23Z","lastTransitionTime":"2026-01-29T11:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.981144 4593 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"670b3c30-a5d0-4b0c-bcf2-4664323fba7b\\\",\\\"systemUUID\\\":\\\"45084d3a-e241-4a9c-9dcd-e9b4966c3a23\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:23Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:23 crc kubenswrapper[4593]: E0129 11:00:23.981363 4593 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.006439 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.006475 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.006484 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.006497 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.006506 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.074115 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.074159 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.074135 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.074113 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:24 crc kubenswrapper[4593]: E0129 11:00:24.074272 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:24 crc kubenswrapper[4593]: E0129 11:00:24.074489 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:24 crc kubenswrapper[4593]: E0129 11:00:24.074675 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:24 crc kubenswrapper[4593]: E0129 11:00:24.074783 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.098481 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:44:40.607966608 +0000 UTC Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.109441 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.109525 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.109543 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.109563 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.109578 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.212578 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.212611 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.212621 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.212648 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.212674 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.315512 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.315546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.315554 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.315574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.315583 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.417779 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.417827 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.417841 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.417868 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.417882 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.521104 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.521431 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.521566 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.521718 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.521809 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.624670 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.624710 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.624723 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.624740 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.624752 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.727210 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.727260 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.727269 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.727284 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.727293 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.829660 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.829707 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.829720 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.829739 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.829756 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.936073 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.936462 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.936588 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.936716 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:24 crc kubenswrapper[4593]: I0129 11:00:24.936888 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:24Z","lastTransitionTime":"2026-01-29T11:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.040313 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.040349 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.040357 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.040371 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.040383 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.088800 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.088129 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27d4efcc-5516-48f8-b823-410c48349569\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96af555718c85d958e5e6ff04df0c2a39cf2a2d90ed75aa8ce3de1aeccd58ff2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d58235ff8efa3285de647904b309802e9e59de3498d59d86437eae4b9afa2ad1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d58235ff8efa3285de647904b309802e9e59de3498d59d86437eae4b9afa2ad1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.098976 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:47:22.273448353 +0000 UTC Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.101092 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.110928 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47b33c04-1415-41d1-9264-1c4b9de87fff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75a611d1737e3c2cd75e1a8813fc80e9f0885da54a8e3db6aa9ea938b61c9f83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://573514a3ecdd373fbb3e7fb5a601c2f83f0a16b3301f1419f676c41fa6f6fa83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8fhqm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qb424\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 
11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.120293 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d229804-724c-4e21-89ac-e3369b615389\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:30Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t27pv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:30Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7jm9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.131916 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4625d53e-4f87-4e6b-8330-da75908b561a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e78d4bf93e700d23ee3fa0cdb330e8cea013f46915657114d6f41cfd44ce542\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d94bedb662fe7bca202c89266575dd897a48367ab27d0ef1eb0efac586ce4889\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f4d29d280c361ebf535962c73e4d4fcffb862c0b343a356d6f951645519082a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.143073 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.143323 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.143423 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.143516 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.143590 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.146514 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zk9np" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bf08558-eb2b-4c00-8494-6f9691a7e3b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49b97b630b7bcee960042933f963a70da0cfd891381e5dcd8cfa7385cf1af0c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7fd2f071325508d7c575ebbb13a9b8d69dc2173c90857c31b0cb696c32c27a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bad1fd0a905b1e78e433c450c4f366447e3f8fdf89c6b154b5ce7a7388e32b27\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56ae7fc349ed5f432b5577a9961508feb68a0bca356aaf15677dda22565a2c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b562554dde3914a861bf52a1ee80a829bcb4ef5817b698ed8f833a00bd767e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b546ca1961933625c91b9f8641cd074fb90f6235b90ba7e807f7af24fda4c5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f09caf5ffcfcfd6f1b0420e151a3cc81151a07c91e1b669b54a6b9bd5b1b022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r7p5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zk9np\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.164354 4593 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"943b00a1-4aae-4054-b4fd-dc512fe58270\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"message\\\":\\\"hift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0129 11:00:06.198924 6472 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-controller-manager-operator/metrics]} name:Service_openshift-kube-controller-manager-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.219:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {3ec9f67e-7758-4707-a6d0-2dc28f28ac37}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0129 11:00:06.198955 6472 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfpld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vmt7l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.178196 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.191813 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.204171 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://414e0a690f82a6699c3598f2c939405cc9868b7306115e18946a61773538d0f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ed6a824cf995ec951efe231fd3f2f7cd9b9a3a034da9e8e7e6b4aaf382fc5a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.216489 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc72f43380cc49174b630035cded1a2b6cdb2e1670aba433761cdfd6aa7e78b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.228174 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5eed1f11-8e73-4894-965f-a670f6c877b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae708227c75c15d41f6b25ffc473a53301a96a25eea0d9e178e2b5cdf1dbe7e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55q6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p4zf2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.240057 4593 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5026ebda-6390-490e-bdda-0f9a1de13f06\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://770eac823720571be84970ca91371624bf9a1ef60d4c0ea4dc0011cb1319aa18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06898c0c80943cfb41dfb8b2f126694ec289f605b86e24c7df0bf68a15c4ee7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://100535b62a75f14594466d97f789106e9a51f35605ef3250a2b2e067568e6d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1e1a72b21fc1b77cfd3259a3de059d9cf23817a2629ae74890835989909eafe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.248264 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.248320 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.248335 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.248352 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.248362 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.251755 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b28ebaa7-bd83-4239-8d22-71b82cdc8d0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T10:59:15Z\\\",\\\"message\\\":\\\"le observer\\\\nW0129 10:59:15.141129 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0129 10:59:15.145916 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 10:59:15.146798 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2920622696/tls.crt::/tmp/serving-cert-2920622696/tls.key\\\\\\\"\\\\nI0129 10:59:15.592128 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 10:59:15.596563 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 10:59:15.596695 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 10:59:15.596746 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 10:59:15.596777 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 10:59:15.603461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 10:59:15.603480 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603483 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 10:59:15.603487 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0129 10:59:15.603490 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 10:59:15.603493 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 10:59:15.603496 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0129 10:59:15.603586 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0129 10:59:15.605453 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:58:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T10:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T10:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:58:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.265919 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24574f9bf312e4e5732012cc9d2bc9d674ee0457b90a701bbb505332693b228a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.276439 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mkxdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b36fce0b-62b3-4076-a13e-e6048a4d9a4e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f766f7cfeefed62617449558379c1e34cca3908a9dedc5e79449ebdcaae032a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gjtz8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mkxdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.287387 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-xpt4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76afd0b-36c6-4faa-9278-c08c60c483e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T11:00:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T11:00:04Z\\\",\\\"message\\\":\\\"2026-01-29T10:59:19+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441\\\\n2026-01-29T10:59:19+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_54d2f8d5-9d8a-4529-b4b4-e1c8695c8441 to /host/opt/cni/bin/\\\\n2026-01-29T10:59:19Z [verbose] multus-daemon started\\\\n2026-01-29T10:59:19Z [verbose] Readiness Indicator file check\\\\n2026-01-29T11:00:04Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T10:59:17Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T11:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhqmv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xpt4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.297136 4593 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-42qv9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bae5deb1-f488-4080-8a68-215c491015f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T10:59:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1290436f2601692a4eda8de6f157dea6fca1c39202ea36f80f8324ba6d254ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T10:59:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2kd2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T10:59:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-42qv9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T11:00:25Z is after 2025-08-24T17:21:41Z" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.350145 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.350178 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.350188 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.350203 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.350213 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.452622 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.452746 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.452760 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.452774 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.452784 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.555157 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.555193 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.555210 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.555227 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.555269 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.658040 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.658115 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.658128 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.658144 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.658157 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.761203 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.761234 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.761244 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.761258 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.761267 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.863818 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.863850 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.863858 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.863872 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.863881 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.967083 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.967120 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.967163 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.967470 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:25 crc kubenswrapper[4593]: I0129 11:00:25.967488 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:25Z","lastTransitionTime":"2026-01-29T11:00:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.070328 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.070393 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.070402 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.070420 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.070435 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.074689 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.074722 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.074689 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.074863 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:26 crc kubenswrapper[4593]: E0129 11:00:26.074910 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:26 crc kubenswrapper[4593]: E0129 11:00:26.074969 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:26 crc kubenswrapper[4593]: E0129 11:00:26.075046 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:26 crc kubenswrapper[4593]: E0129 11:00:26.075098 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.099725 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:21:40.955780956 +0000 UTC Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.173086 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.173123 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.173133 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.173152 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.173166 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.275903 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.275948 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.275957 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.275972 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.275981 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.378557 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.378801 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.378868 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.378938 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.379027 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.481757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.481797 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.481806 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.481840 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.481859 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.584227 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.584280 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.584288 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.584301 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:26 crc kubenswrapper[4593]: I0129 11:00:26.584328 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:26Z","lastTransitionTime":"2026-01-29T11:00:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.097613 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.097673 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.097684 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.097700 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.097711 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:27Z","lastTransitionTime":"2026-01-29T11:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:27 crc kubenswrapper[4593]: I0129 11:00:27.100747 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 10:20:42.687564196 +0000 UTC
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.023617 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.023674 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.023685 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.023699 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.023712 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.074426 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.074374 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.074464 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.074486 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:00:28 crc kubenswrapper[4593]: E0129 11:00:28.075034 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:00:28 crc kubenswrapper[4593]: E0129 11:00:28.075113 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389"
Jan 29 11:00:28 crc kubenswrapper[4593]: E0129 11:00:28.075205 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 11:00:28 crc kubenswrapper[4593]: E0129 11:00:28.075153 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.102007 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 06:06:31.22182521 +0000 UTC
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.125932 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.125972 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.125982 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.126000 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.126014 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.228133 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.228162 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.228171 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.228184 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:28 crc kubenswrapper[4593]: I0129 11:00:28.228193 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:28Z","lastTransitionTime":"2026-01-29T11:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.050259 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.050294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.050303 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.050318 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.050327 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.075712 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"
Jan 29 11:00:29 crc kubenswrapper[4593]: E0129 11:00:29.075921 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.102609 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:41:56.670921189 +0000 UTC
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.152364 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.152409 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.152417 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.152432 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.152442 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.254570 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.254849 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.254918 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.255002 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:29 crc kubenswrapper[4593]: I0129 11:00:29.255108 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:29Z","lastTransitionTime":"2026-01-29T11:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.074572 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.074617 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.074600 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:00:30 crc kubenswrapper[4593]: E0129 11:00:30.074721 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.074597 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:00:30 crc kubenswrapper[4593]: E0129 11:00:30.074807 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389"
Jan 29 11:00:30 crc kubenswrapper[4593]: E0129 11:00:30.074931 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 11:00:30 crc kubenswrapper[4593]: E0129 11:00:30.075005 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.075944 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.075965 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.075974 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.075984 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.075993 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.103186 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:09:11.177025138 +0000 UTC
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.177936 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.177991 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.178005 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.178027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:30 crc kubenswrapper[4593]: I0129 11:00:30.178042 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:30Z","lastTransitionTime":"2026-01-29T11:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.100757 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.100796 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.100807 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.100821 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.100832 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:31Z","lastTransitionTime":"2026-01-29T11:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 11:00:31 crc kubenswrapper[4593]: I0129 11:00:31.105692 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 19:42:57.076518253 +0000 UTC
Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.025078 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.025116 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.025127 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.025142 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.025153 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.074682 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:32 crc kubenswrapper[4593]: E0129 11:00:32.075002 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.074702 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:32 crc kubenswrapper[4593]: E0129 11:00:32.075209 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.074682 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:32 crc kubenswrapper[4593]: E0129 11:00:32.075370 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.074770 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:32 crc kubenswrapper[4593]: E0129 11:00:32.075599 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.108055 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:56:36.496167571 +0000 UTC Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.126818 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.126859 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.126870 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.126884 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.126893 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.228913 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.229205 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.229289 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.229380 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.229475 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.331789 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.331829 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.331844 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.331865 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.331884 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.433294 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.433326 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.433336 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.433353 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.433361 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.535534 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.535571 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.535582 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.535595 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.535604 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.637964 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.638015 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.638027 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.638044 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.638056 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.740363 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.740441 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.740452 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.740468 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.740480 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.842316 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.842407 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.842421 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.842437 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.842447 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.944269 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.944343 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.944356 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.944380 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:32 crc kubenswrapper[4593]: I0129 11:00:32.944390 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:32Z","lastTransitionTime":"2026-01-29T11:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.046673 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.046796 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.046817 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.046840 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.046862 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.108646 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:50:45.693748619 +0000 UTC Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.149513 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.149546 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.149555 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.149568 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.149577 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.251962 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.252011 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.252026 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.252044 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.252056 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.355060 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.355090 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.355099 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.355114 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.355123 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.457445 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.457486 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.457497 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.457514 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.457526 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.559804 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.559845 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.559857 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.559872 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.559883 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.661517 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.661552 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.661561 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.661574 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.661582 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.763761 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.763815 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.763837 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.763859 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.763869 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.867118 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.867406 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.867492 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.867589 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.867807 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.970855 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.971138 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.971221 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.971326 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:33 crc kubenswrapper[4593]: I0129 11:00:33.971426 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:33Z","lastTransitionTime":"2026-01-29T11:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.013930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.014110 4593 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.014221 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs podName:7d229804-724c-4e21-89ac-e3369b615389 nodeName:}" failed. No retries permitted until 2026-01-29 11:01:38.01419598 +0000 UTC m=+163.887230251 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs") pod "network-metrics-daemon-7jm9m" (UID: "7d229804-724c-4e21-89ac-e3369b615389") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.073989 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074057 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074390 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074406 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074429 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074086 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074445 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:34Z","lastTransitionTime":"2026-01-29T11:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074087 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.074559 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.074058 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.074699 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.074351 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:34 crc kubenswrapper[4593]: E0129 11:00:34.074887 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.082360 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.082460 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.082532 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.082598 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.082676 4593 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T11:00:34Z","lastTransitionTime":"2026-01-29T11:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.108867 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:19:41.479918552 +0000 UTC Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.108953 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.119329 4593 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.123491 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw"] Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.123923 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.127820 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.128140 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.128807 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.129026 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.154526 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podStartSLOduration=78.154503382 podStartE2EDuration="1m18.154503382s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.143123098 +0000 UTC m=+100.016157289" watchObservedRunningTime="2026-01-29 11:00:34.154503382 +0000 UTC m=+100.027537573" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.215951 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.216014 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.216038 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.216060 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.216121 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-service-ca\") pod 
\"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.217459 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xpt4q" podStartSLOduration=78.217447329 podStartE2EDuration="1m18.217447329s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.216658218 +0000 UTC m=+100.089692419" watchObservedRunningTime="2026-01-29 11:00:34.217447329 +0000 UTC m=+100.090481530" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.217666 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mkxdt" podStartSLOduration=78.217660655 podStartE2EDuration="1m18.217660655s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.202531898 +0000 UTC m=+100.075566089" watchObservedRunningTime="2026-01-29 11:00:34.217660655 +0000 UTC m=+100.090694846" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.226365 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-42qv9" podStartSLOduration=78.226347325 podStartE2EDuration="1m18.226347325s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.225771889 +0000 UTC m=+100.098806100" watchObservedRunningTime="2026-01-29 11:00:34.226347325 +0000 UTC m=+100.099381516" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.253493 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.253478403 podStartE2EDuration="43.253478403s" podCreationTimestamp="2026-01-29 10:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.252846205 +0000 UTC m=+100.125880406" watchObservedRunningTime="2026-01-29 11:00:34.253478403 +0000 UTC m=+100.126512594" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.274726 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.274707759 podStartE2EDuration="9.274707759s" podCreationTimestamp="2026-01-29 11:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.274481543 +0000 UTC m=+100.147515764" watchObservedRunningTime="2026-01-29 11:00:34.274707759 +0000 UTC m=+100.147741950" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.291913 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.291893433 podStartE2EDuration="1m18.291893433s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.291450631 +0000 UTC m=+100.164484822" watchObservedRunningTime="2026-01-29 
11:00:34.291893433 +0000 UTC m=+100.164927634" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.316872 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.316857 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=11.316838092 podStartE2EDuration="11.316838092s" podCreationTimestamp="2026-01-29 11:00:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.316809121 +0000 UTC m=+100.189843322" watchObservedRunningTime="2026-01-29 11:00:34.316838092 +0000 UTC m=+100.189872283" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.316927 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.316984 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.317001 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.317061 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.317099 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.317149 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: 
\"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.317942 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.326282 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.340195 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4b66123-cd65-43f4-8c09-ca4b8537e2e8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g2qsw\" (UID: \"c4b66123-cd65-43f4-8c09-ca4b8537e2e8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.361532 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qb424" podStartSLOduration=78.361484314 podStartE2EDuration="1m18.361484314s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.350958974 +0000 UTC m=+100.223993165" watchObservedRunningTime="2026-01-29 11:00:34.361484314 +0000 UTC m=+100.234518505" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.373419 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=79.373400473 podStartE2EDuration="1m19.373400473s" podCreationTimestamp="2026-01-29 10:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.373260618 +0000 UTC m=+100.246294809" watchObservedRunningTime="2026-01-29 11:00:34.373400473 +0000 UTC m=+100.246434664" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.423428 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zk9np" podStartSLOduration=78.423411292 podStartE2EDuration="1m18.423411292s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:34.396754537 +0000 UTC m=+100.269788748" watchObservedRunningTime="2026-01-29 11:00:34.423411292 +0000 UTC m=+100.296445483" Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.447052 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" Jan 29 11:00:34 crc kubenswrapper[4593]: W0129 11:00:34.460149 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4b66123_cd65_43f4_8c09_ca4b8537e2e8.slice/crio-6d8c9afc6ac792586d940828f69e7c7f87c62169dc24ea4f9e3c81f77014ef86 WatchSource:0}: Error finding container 6d8c9afc6ac792586d940828f69e7c7f87c62169dc24ea4f9e3c81f77014ef86: Status 404 returned error can't find the container with id 6d8c9afc6ac792586d940828f69e7c7f87c62169dc24ea4f9e3c81f77014ef86 Jan 29 11:00:34 crc kubenswrapper[4593]: I0129 11:00:34.581424 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" event={"ID":"c4b66123-cd65-43f4-8c09-ca4b8537e2e8","Type":"ContainerStarted","Data":"6d8c9afc6ac792586d940828f69e7c7f87c62169dc24ea4f9e3c81f77014ef86"} Jan 29 11:00:35 crc kubenswrapper[4593]: I0129 11:00:35.585307 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" event={"ID":"c4b66123-cd65-43f4-8c09-ca4b8537e2e8","Type":"ContainerStarted","Data":"3288bb11a1c18beee2c5f4b89aca8e57baa50fa7494b4f22575ad2c6ac8b9e5b"} Jan 29 11:00:36 crc kubenswrapper[4593]: I0129 11:00:36.074841 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:36 crc kubenswrapper[4593]: I0129 11:00:36.074904 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:36 crc kubenswrapper[4593]: E0129 11:00:36.075182 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:36 crc kubenswrapper[4593]: E0129 11:00:36.075305 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:36 crc kubenswrapper[4593]: I0129 11:00:36.075371 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:36 crc kubenswrapper[4593]: I0129 11:00:36.075390 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:36 crc kubenswrapper[4593]: E0129 11:00:36.075464 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:36 crc kubenswrapper[4593]: E0129 11:00:36.075522 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:38 crc kubenswrapper[4593]: I0129 11:00:38.074683 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:38 crc kubenswrapper[4593]: I0129 11:00:38.074713 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:38 crc kubenswrapper[4593]: I0129 11:00:38.074714 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:38 crc kubenswrapper[4593]: I0129 11:00:38.074683 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:38 crc kubenswrapper[4593]: E0129 11:00:38.074821 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:38 crc kubenswrapper[4593]: E0129 11:00:38.074952 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:38 crc kubenswrapper[4593]: E0129 11:00:38.075039 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:38 crc kubenswrapper[4593]: E0129 11:00:38.074910 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:40 crc kubenswrapper[4593]: I0129 11:00:40.074140 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:40 crc kubenswrapper[4593]: I0129 11:00:40.074205 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:40 crc kubenswrapper[4593]: I0129 11:00:40.074742 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:40 crc kubenswrapper[4593]: I0129 11:00:40.074860 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:40 crc kubenswrapper[4593]: E0129 11:00:40.074960 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:40 crc kubenswrapper[4593]: E0129 11:00:40.075108 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:40 crc kubenswrapper[4593]: E0129 11:00:40.075197 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:40 crc kubenswrapper[4593]: E0129 11:00:40.075272 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:42 crc kubenswrapper[4593]: I0129 11:00:42.074345 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:42 crc kubenswrapper[4593]: I0129 11:00:42.074414 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:42 crc kubenswrapper[4593]: I0129 11:00:42.074344 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:42 crc kubenswrapper[4593]: I0129 11:00:42.074346 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:42 crc kubenswrapper[4593]: E0129 11:00:42.074485 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:42 crc kubenswrapper[4593]: E0129 11:00:42.074579 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:42 crc kubenswrapper[4593]: E0129 11:00:42.074693 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:42 crc kubenswrapper[4593]: E0129 11:00:42.074757 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:44 crc kubenswrapper[4593]: I0129 11:00:44.074446 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:44 crc kubenswrapper[4593]: I0129 11:00:44.074536 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:44 crc kubenswrapper[4593]: E0129 11:00:44.074582 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:44 crc kubenswrapper[4593]: E0129 11:00:44.074865 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:44 crc kubenswrapper[4593]: I0129 11:00:44.074890 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:44 crc kubenswrapper[4593]: E0129 11:00:44.075350 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:44 crc kubenswrapper[4593]: I0129 11:00:44.075460 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:44 crc kubenswrapper[4593]: I0129 11:00:44.075521 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:00:44 crc kubenswrapper[4593]: E0129 11:00:44.075783 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:44 crc kubenswrapper[4593]: E0129 11:00:44.075812 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vmt7l_openshift-ovn-kubernetes(943b00a1-4aae-4054-b4fd-dc512fe58270)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" Jan 29 11:00:46 crc kubenswrapper[4593]: I0129 11:00:46.074750 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:46 crc kubenswrapper[4593]: I0129 11:00:46.074764 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:46 crc kubenswrapper[4593]: I0129 11:00:46.074900 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:46 crc kubenswrapper[4593]: E0129 11:00:46.075404 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:46 crc kubenswrapper[4593]: E0129 11:00:46.075068 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:46 crc kubenswrapper[4593]: E0129 11:00:46.075001 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:46 crc kubenswrapper[4593]: I0129 11:00:46.074765 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:46 crc kubenswrapper[4593]: E0129 11:00:46.076229 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:48 crc kubenswrapper[4593]: I0129 11:00:48.073909 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:48 crc kubenswrapper[4593]: E0129 11:00:48.074666 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:48 crc kubenswrapper[4593]: I0129 11:00:48.074003 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:48 crc kubenswrapper[4593]: E0129 11:00:48.074885 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:48 crc kubenswrapper[4593]: I0129 11:00:48.073920 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:48 crc kubenswrapper[4593]: E0129 11:00:48.075116 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:48 crc kubenswrapper[4593]: I0129 11:00:48.074042 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:48 crc kubenswrapper[4593]: E0129 11:00:48.075326 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:50 crc kubenswrapper[4593]: I0129 11:00:50.074585 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:50 crc kubenswrapper[4593]: E0129 11:00:50.075244 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:50 crc kubenswrapper[4593]: I0129 11:00:50.074751 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:50 crc kubenswrapper[4593]: E0129 11:00:50.075506 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:50 crc kubenswrapper[4593]: I0129 11:00:50.074769 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:50 crc kubenswrapper[4593]: E0129 11:00:50.075591 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:50 crc kubenswrapper[4593]: I0129 11:00:50.074657 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:50 crc kubenswrapper[4593]: E0129 11:00:50.075715 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.631824 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/1.log" Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.632381 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/0.log" Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.632431 4593 generic.go:334] "Generic (PLEG): container finished" podID="c76afd0b-36c6-4faa-9278-c08c60c483e9" containerID="ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117" exitCode=1 Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.632459 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerDied","Data":"ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117"} Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.632492 4593 scope.go:117] "RemoveContainer" containerID="c784c57bb52e16386a81562c12066500836976eade9505aaada1c3daadd69d08" Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.633366 4593 scope.go:117] "RemoveContainer" containerID="ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117" Jan 29 11:00:51 crc kubenswrapper[4593]: E0129 11:00:51.634076 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-xpt4q_openshift-multus(c76afd0b-36c6-4faa-9278-c08c60c483e9)\"" pod="openshift-multus/multus-xpt4q" podUID="c76afd0b-36c6-4faa-9278-c08c60c483e9" Jan 29 11:00:51 crc kubenswrapper[4593]: I0129 11:00:51.650622 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g2qsw" podStartSLOduration=95.650604661 podStartE2EDuration="1m35.650604661s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:35.604850283 +0000 UTC m=+101.477884524" watchObservedRunningTime="2026-01-29 11:00:51.650604661 +0000 UTC m=+117.523638862" Jan 29 11:00:52 crc kubenswrapper[4593]: I0129 11:00:52.074251 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:52 crc kubenswrapper[4593]: I0129 11:00:52.074328 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:52 crc kubenswrapper[4593]: E0129 11:00:52.074389 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:52 crc kubenswrapper[4593]: E0129 11:00:52.074451 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:52 crc kubenswrapper[4593]: I0129 11:00:52.074494 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:52 crc kubenswrapper[4593]: I0129 11:00:52.074328 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:52 crc kubenswrapper[4593]: E0129 11:00:52.074533 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:52 crc kubenswrapper[4593]: E0129 11:00:52.074618 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:52 crc kubenswrapper[4593]: I0129 11:00:52.636311 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/1.log" Jan 29 11:00:54 crc kubenswrapper[4593]: I0129 11:00:54.074477 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:54 crc kubenswrapper[4593]: E0129 11:00:54.074609 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:54 crc kubenswrapper[4593]: I0129 11:00:54.074725 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:54 crc kubenswrapper[4593]: E0129 11:00:54.074785 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:54 crc kubenswrapper[4593]: I0129 11:00:54.075055 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:54 crc kubenswrapper[4593]: E0129 11:00:54.075108 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:54 crc kubenswrapper[4593]: I0129 11:00:54.075238 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:54 crc kubenswrapper[4593]: E0129 11:00:54.075408 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:55 crc kubenswrapper[4593]: E0129 11:00:55.105972 4593 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 11:00:55 crc kubenswrapper[4593]: E0129 11:00:55.186173 4593 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.074815 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:56 crc kubenswrapper[4593]: E0129 11:00:56.074944 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.074990 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.075007 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.075020 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:56 crc kubenswrapper[4593]: E0129 11:00:56.075079 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:56 crc kubenswrapper[4593]: E0129 11:00:56.075467 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:56 crc kubenswrapper[4593]: E0129 11:00:56.075553 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.075763 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.649305 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/3.log" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.651475 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerStarted","Data":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.652567 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.922166 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podStartSLOduration=100.922142296 podStartE2EDuration="1m40.922142296s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:00:56.682134114 +0000 UTC m=+122.555168305" watchObservedRunningTime="2026-01-29 11:00:56.922142296 +0000 UTC m=+122.795176507" Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.923895 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7jm9m"] Jan 29 11:00:56 crc kubenswrapper[4593]: I0129 11:00:56.924062 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:56 crc kubenswrapper[4593]: E0129 11:00:56.924196 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:00:58 crc kubenswrapper[4593]: I0129 11:00:58.074711 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:00:58 crc kubenswrapper[4593]: I0129 11:00:58.074753 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:00:58 crc kubenswrapper[4593]: I0129 11:00:58.074753 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:00:58 crc kubenswrapper[4593]: E0129 11:00:58.074870 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:00:58 crc kubenswrapper[4593]: E0129 11:00:58.074950 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:00:58 crc kubenswrapper[4593]: E0129 11:00:58.075011 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:00:59 crc kubenswrapper[4593]: I0129 11:00:59.074728 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:00:59 crc kubenswrapper[4593]: E0129 11:00:59.074894 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:00 crc kubenswrapper[4593]: I0129 11:01:00.074600 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:00 crc kubenswrapper[4593]: I0129 11:01:00.074665 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:00 crc kubenswrapper[4593]: I0129 11:01:00.074688 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:00 crc kubenswrapper[4593]: E0129 11:01:00.074756 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:00 crc kubenswrapper[4593]: E0129 11:01:00.074803 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:00 crc kubenswrapper[4593]: E0129 11:01:00.074870 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:00 crc kubenswrapper[4593]: E0129 11:01:00.187911 4593 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:01:01 crc kubenswrapper[4593]: I0129 11:01:01.074726 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:01 crc kubenswrapper[4593]: E0129 11:01:01.074877 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:02 crc kubenswrapper[4593]: I0129 11:01:02.074057 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:02 crc kubenswrapper[4593]: I0129 11:01:02.074129 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:02 crc kubenswrapper[4593]: E0129 11:01:02.074219 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:02 crc kubenswrapper[4593]: E0129 11:01:02.074255 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:02 crc kubenswrapper[4593]: I0129 11:01:02.074739 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:02 crc kubenswrapper[4593]: E0129 11:01:02.074816 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:03 crc kubenswrapper[4593]: I0129 11:01:03.075041 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:03 crc kubenswrapper[4593]: E0129 11:01:03.075240 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.074568 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.074583 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:04 crc kubenswrapper[4593]: E0129 11:01:04.074740 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:04 crc kubenswrapper[4593]: E0129 11:01:04.074841 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.075167 4593 scope.go:117] "RemoveContainer" containerID="ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.074800 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:04 crc kubenswrapper[4593]: E0129 11:01:04.075763 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.673418 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/1.log" Jan 29 11:01:04 crc kubenswrapper[4593]: I0129 11:01:04.673774 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerStarted","Data":"7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2"} Jan 29 11:01:05 crc kubenswrapper[4593]: I0129 11:01:05.074417 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:05 crc kubenswrapper[4593]: E0129 11:01:05.075827 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:05 crc kubenswrapper[4593]: E0129 11:01:05.188317 4593 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 11:01:06 crc kubenswrapper[4593]: I0129 11:01:06.074414 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:06 crc kubenswrapper[4593]: I0129 11:01:06.074420 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:06 crc kubenswrapper[4593]: E0129 11:01:06.074581 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:06 crc kubenswrapper[4593]: E0129 11:01:06.074682 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:06 crc kubenswrapper[4593]: I0129 11:01:06.074438 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:06 crc kubenswrapper[4593]: E0129 11:01:06.074759 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:07 crc kubenswrapper[4593]: I0129 11:01:07.074743 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:07 crc kubenswrapper[4593]: E0129 11:01:07.074881 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:08 crc kubenswrapper[4593]: I0129 11:01:08.074622 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:08 crc kubenswrapper[4593]: I0129 11:01:08.074710 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:08 crc kubenswrapper[4593]: E0129 11:01:08.074775 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:08 crc kubenswrapper[4593]: E0129 11:01:08.074930 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:08 crc kubenswrapper[4593]: I0129 11:01:08.075277 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:08 crc kubenswrapper[4593]: E0129 11:01:08.075415 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:09 crc kubenswrapper[4593]: I0129 11:01:09.074622 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:09 crc kubenswrapper[4593]: E0129 11:01:09.075071 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7jm9m" podUID="7d229804-724c-4e21-89ac-e3369b615389" Jan 29 11:01:10 crc kubenswrapper[4593]: I0129 11:01:10.074207 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:10 crc kubenswrapper[4593]: I0129 11:01:10.074263 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:10 crc kubenswrapper[4593]: E0129 11:01:10.074323 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 11:01:10 crc kubenswrapper[4593]: E0129 11:01:10.074453 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 11:01:10 crc kubenswrapper[4593]: I0129 11:01:10.074207 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:10 crc kubenswrapper[4593]: E0129 11:01:10.074535 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 11:01:11 crc kubenswrapper[4593]: I0129 11:01:11.074234 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:11 crc kubenswrapper[4593]: I0129 11:01:11.077048 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 29 11:01:11 crc kubenswrapper[4593]: I0129 11:01:11.077573 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.074419 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.074527 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.074600 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.077344 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.077793 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.077952 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 29 11:01:12 crc kubenswrapper[4593]: I0129 11:01:12.077990 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.695913 4593 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.734306 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.734821 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.735239 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m9zzn"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.735836 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.738739 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.739252 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.739565 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.739873 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.739971 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.740411 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.741053 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.741097 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.741377 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.741504 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.744140 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.744224 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.745424 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.745451 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.746071 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.750007 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.750351 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.751512 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.751696 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.751720 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.751853 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.751938 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.753048 4593 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.754298 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.754555 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.754674 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.754983 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.754994 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.758075 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.758153 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.759351 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.760776 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.760818 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.762909 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.764824 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.764829 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.768127 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gl968"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.769009 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.771355 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.783713 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.784776 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.785820 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.786278 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.786793 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.787420 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.787704 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.792989 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.793227 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gz9wd"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.793591 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-t7wn4"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.793850 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fm7cc"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.794110 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.794719 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.794944 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.795505 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.796009 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vtdww"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.796455 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.796757 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.796894 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.797102 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.797131 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.798086 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.801960 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.802955 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.804754 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.805299 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.808033 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m9zzn"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814578 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814682 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814719 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814753 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fn6\" (UniqueName: \"kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814787 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21b7f343-d887-4bdf-85c0-9639179e9c56-machine-approver-tls\") pod \"machine-approver-56656f9798-gl968\" (UID: 
\"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814891 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814962 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.814998 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815002 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815034 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-node-pullsecrets\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815088 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-encryption-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815117 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/43e8598d-f86e-425e-8418-bcfb93e3bd63-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815153 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq7vb\" (UniqueName: \"kubernetes.io/projected/21b7f343-d887-4bdf-85c0-9639179e9c56-kube-api-access-mq7vb\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815183 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q57bg\" (UniqueName: \"kubernetes.io/projected/43e8598d-f86e-425e-8418-bcfb93e3bd63-kube-api-access-q57bg\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815247 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815279 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5xjz\" (UniqueName: \"kubernetes.io/projected/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-kube-api-access-r5xjz\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.815305 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822796 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d100ddd-343c-48f6-ad0a-e08d3e23a904-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822876 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d100ddd-343c-48f6-ad0a-e08d3e23a904-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822905 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822926 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822949 4593 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822965 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit-dir\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.822985 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e8598d-f86e-425e-8418-bcfb93e3bd63-serving-cert\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823007 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-image-import-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823026 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2clt8\" (UniqueName: \"kubernetes.io/projected/3d100ddd-343c-48f6-ad0a-e08d3e23a904-kube-api-access-2clt8\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823052 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-auth-proxy-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823072 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-client\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823096 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823120 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823173 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823201 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m95zx\" (UniqueName: \"kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823220 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-serving-cert\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823262 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78txz\" (UniqueName: \"kubernetes.io/projected/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-kube-api-access-78txz\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.823285 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.824533 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.824621 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.826149 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.826556 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.831151 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.831506 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.831737 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.835729 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.836336 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.836611 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.836733 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.836951 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837435 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837530 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837571 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837724 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837840 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837865 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.837939 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838077 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838109 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838123 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838233 4593 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838266 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838292 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838339 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838357 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838530 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838583 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838623 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838718 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838739 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838759 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838801 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838833 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838721 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838834 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838882 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838805 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838721 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838964 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.838981 4593 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839037 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839047 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839067 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839125 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839134 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839188 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839202 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839247 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839271 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839286 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839303 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839274 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839518 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.839870 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.842488 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g72zl"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.843240 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7hr6"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.843354 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.843588 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l64wd"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.843858 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.844316 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.855702 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.856254 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.856581 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.857190 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.857716 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.858169 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.885772 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.886297 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xx52v"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.887456 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.887729 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.888900 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.889528 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.891095 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.891720 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.891996 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.894104 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.920023 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.891722 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924574 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-encryption-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924647 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/43e8598d-f86e-425e-8418-bcfb93e3bd63-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924680 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q57bg\" (UniqueName: \"kubernetes.io/projected/43e8598d-f86e-425e-8418-bcfb93e3bd63-kube-api-access-q57bg\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924714 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq7vb\" (UniqueName: \"kubernetes.io/projected/21b7f343-d887-4bdf-85c0-9639179e9c56-kube-api-access-mq7vb\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924750 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5xjz\" (UniqueName: 
\"kubernetes.io/projected/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-kube-api-access-r5xjz\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924785 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924815 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924848 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924882 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d100ddd-343c-48f6-ad0a-e08d3e23a904-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924916 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d100ddd-343c-48f6-ad0a-e08d3e23a904-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924949 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.924973 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit-dir\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925002 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") 
" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925029 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e8598d-f86e-425e-8418-bcfb93e3bd63-serving-cert\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925057 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-image-import-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925081 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-client\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925111 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2clt8\" (UniqueName: \"kubernetes.io/projected/3d100ddd-343c-48f6-ad0a-e08d3e23a904-kube-api-access-2clt8\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925143 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-auth-proxy-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925174 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925225 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925250 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925329 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-m95zx\" (UniqueName: \"kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925358 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-serving-cert\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925397 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78txz\" (UniqueName: \"kubernetes.io/projected/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-kube-api-access-78txz\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925427 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925459 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925495 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925582 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fn6\" (UniqueName: \"kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925615 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.925943 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.931840 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21b7f343-d887-4bdf-85c0-9639179e9c56-machine-approver-tls\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.931915 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.931951 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.932127 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-node-pullsecrets\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.932515 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-node-pullsecrets\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.958580 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-auth-proxy-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.959070 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit-dir\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.959615 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.959937 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.960274 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21b7f343-d887-4bdf-85c0-9639179e9c56-config\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.960585 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d100ddd-343c-48f6-ad0a-e08d3e23a904-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.960965 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.961281 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.961772 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.962366 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-encryption-config\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.962555 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.963236 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.965890 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.966923 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.967549 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.971261 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.982740 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-image-import-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.983217 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43e8598d-f86e-425e-8418-bcfb93e3bd63-serving-cert\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.983429 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.983529 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.983706 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.985398 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.985700 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.985978 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.986333 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.986349 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 
11:01:14.986359 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rnn8b"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.990111 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.993283 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.993315 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.993731 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994131 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994414 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994610 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994701 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.983784 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995005 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t7wn4"] Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995057 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995066 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995126 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995197 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994129 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995368 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995773 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.995946 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.996141 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.996201 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/43e8598d-f86e-425e-8418-bcfb93e3bd63-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:14 crc kubenswrapper[4593]: I0129 11:01:14.994524 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:14.996858 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-audit\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:14.984578 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:14.997481 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.001572 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.001614 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.002566 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.003474 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d100ddd-343c-48f6-ad0a-e08d3e23a904-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.004237 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/21b7f343-d887-4bdf-85c0-9639179e9c56-machine-approver-tls\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.005521 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-serving-cert\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.007229 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.007596 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.008056 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.008185 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-96whs"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.008883 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.009316 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.010841 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.011363 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.011893 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.013140 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-etcd-client\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.013155 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.016686 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vbsqg"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.017272 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.017441 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.018012 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.019853 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.022448 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.022490 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.024143 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vtdww"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.034068 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.043863 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.046186 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.050052 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.051000 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rnn8b"] Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.052979 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zv27c"] Jan 29 11:01:15 crc 
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.057675 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7hr6"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.057740 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-29j27"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.058301 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zv27c"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.063680 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fm7cc"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.076233 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.077990 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.094566 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gz9wd"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.096850 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.099595 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.099877 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.102611 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.105313 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.106841 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.108304 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zv27c"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.110343 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-96whs"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.112687 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l64wd"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.113943 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.115958 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g72zl"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.116604 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.118171 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.119561 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.121917 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.123963 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jnw9r"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.124557 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jnw9r"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.125531 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-29j27"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.126982 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.128152 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.129284 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jnw9r"]
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.136904 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.162216 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.180390 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.196962 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.217362 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.238176 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.258674 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.277548 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.301226 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.316727 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.336990 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.357108 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.377173 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.397614 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.417584 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.438292 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.478404 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.497234 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.517597 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.538052 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.557583 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.579040 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.597919 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.617120 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.638414 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.657898 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.678105 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.697666 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.735664 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5xjz\" (UniqueName: \"kubernetes.io/projected/10bf1dd7-30e3-48b9-9651-dcda2f63e89d-kube-api-access-r5xjz\") pod \"openshift-apiserver-operator-796bbdcf4f-n4s5k\" (UID: \"10bf1dd7-30e3-48b9-9651-dcda2f63e89d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.737451 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.777543 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78txz\" (UniqueName: \"kubernetes.io/projected/dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9-kube-api-access-78txz\") pod \"apiserver-76f77b778f-m9zzn\" (UID: \"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9\") " pod="openshift-apiserver/apiserver-76f77b778f-m9zzn"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.797422 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m95zx\" (UniqueName: \"kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx\") pod \"controller-manager-879f6c89f-9td98\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9td98"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.814866 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2fn6\" (UniqueName: \"kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6\") pod \"route-controller-manager-6576b87f9c-fnv5h\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.817146 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.844183 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.857849 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.894343 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2clt8\" (UniqueName: \"kubernetes.io/projected/3d100ddd-343c-48f6-ad0a-e08d3e23a904-kube-api-access-2clt8\") pod \"openshift-controller-manager-operator-756b6f6bc6-5gd58\" (UID: \"3d100ddd-343c-48f6-ad0a-e08d3e23a904\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.897588 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.917658 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.938136 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.953892 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.958520 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.963525 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.977799 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.977975 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.996265 4593 request.go:700] Waited for 1.000658019s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&limit=500&resourceVersion=0
Jan 29 11:01:15 crc kubenswrapper[4593]: I0129 11:01:15.999094 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.035532 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q57bg\" (UniqueName: \"kubernetes.io/projected/43e8598d-f86e-425e-8418-bcfb93e3bd63-kube-api-access-q57bg\") pod \"openshift-config-operator-7777fb866f-g5zq7\" (UID: \"43e8598d-f86e-425e-8418-bcfb93e3bd63\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.035863 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.038484 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.057301 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.083423 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.096292 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.117313 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq7vb\" (UniqueName: \"kubernetes.io/projected/21b7f343-d887-4bdf-85c0-9639179e9c56-kube-api-access-mq7vb\") pod \"machine-approver-56656f9798-gl968\" (UID: \"21b7f343-d887-4bdf-85c0-9639179e9c56\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.117516 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.137643 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.157684 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.178458 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.198594 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.217535 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.237557 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.256647 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.277841 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.283467 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m9zzn"]
Jan 29 11:01:16 crc kubenswrapper[4593]: W0129 11:01:16.293405 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddabb0548_dbdb_438c_a98c_2eb6e2b2c0d9.slice/crio-ef13c43f67220a68a0302026a063a5119b05d414e0e4b778e47f86ed7a4f73d1 WatchSource:0}: Error finding container ef13c43f67220a68a0302026a063a5119b05d414e0e4b778e47f86ed7a4f73d1: Status 404 returned error can't find the container with id ef13c43f67220a68a0302026a063a5119b05d414e0e4b778e47f86ed7a4f73d1
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.295301 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.297040 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.305611 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"]
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.318465 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.331190 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58"]
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.338592 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 29 11:01:16 crc kubenswrapper[4593]: W0129 11:01:16.341548 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d100ddd_343c_48f6_ad0a_e08d3e23a904.slice/crio-430d411464bb34eb6bcacc91fa870f01ce66a61a74d961098d1c64c3a1da900d WatchSource:0}: Error finding container 430d411464bb34eb6bcacc91fa870f01ce66a61a74d961098d1c64c3a1da900d: Status 404 returned error can't find the container with id 430d411464bb34eb6bcacc91fa870f01ce66a61a74d961098d1c64c3a1da900d
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.350412 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.357772 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 29 11:01:16 crc kubenswrapper[4593]: W0129 11:01:16.371015 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21b7f343_d887_4bdf_85c0_9639179e9c56.slice/crio-6a7fec5b24991a80130c767d90052f8071d829bf02577def2e5028fca2a30758 WatchSource:0}: Error finding container 6a7fec5b24991a80130c767d90052f8071d829bf02577def2e5028fca2a30758: Status 404 returned error can't find the container with id 6a7fec5b24991a80130c767d90052f8071d829bf02577def2e5028fca2a30758
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.381180 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.397787 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.417451 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.434535 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k"]
Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.440374 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
object-"openshift-multus"/"multus-admission-controller-secret" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.448798 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.456960 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 29 11:01:16 crc kubenswrapper[4593]: W0129 11:01:16.460964 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76a22425_a78d_4304_b158_f577c6ef4c4f.slice/crio-334a01364083a20e9cff55591ab0397980e71497fd4d2b540c48088a18808a8d WatchSource:0}: Error finding container 334a01364083a20e9cff55591ab0397980e71497fd4d2b540c48088a18808a8d: Status 404 returned error can't find the container with id 334a01364083a20e9cff55591ab0397980e71497fd4d2b540c48088a18808a8d Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.477970 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.479151 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7"] Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.497620 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.518199 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: W0129 11:01:16.518878 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43e8598d_f86e_425e_8418_bcfb93e3bd63.slice/crio-b612bf39ff3fb29fdaefee7b832d03191002c86e3910e2c824c2f09ecd34a8e8 WatchSource:0}: Error finding container b612bf39ff3fb29fdaefee7b832d03191002c86e3910e2c824c2f09ecd34a8e8: Status 404 returned error can't find the container with id b612bf39ff3fb29fdaefee7b832d03191002c86e3910e2c824c2f09ecd34a8e8 Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.540292 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.568613 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.579116 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.597453 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.620093 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.637068 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.657403 4593 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.697809 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.725135 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.727363 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" event={"ID":"10bf1dd7-30e3-48b9-9651-dcda2f63e89d","Type":"ContainerStarted","Data":"72c1981c91f3459f12949aa930bfc87fd00416da06ce2e5298707aa11ecf8106"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.728067 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" event={"ID":"10bf1dd7-30e3-48b9-9651-dcda2f63e89d","Type":"ContainerStarted","Data":"cb292b817086fa29bdd36ed2260478bb8f786f2e72ccac803988b117e65dd3ab"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.729320 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" event={"ID":"43e8598d-f86e-425e-8418-bcfb93e3bd63","Type":"ContainerStarted","Data":"b612bf39ff3fb29fdaefee7b832d03191002c86e3910e2c824c2f09ecd34a8e8"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.731182 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" event={"ID":"3d100ddd-343c-48f6-ad0a-e08d3e23a904","Type":"ContainerStarted","Data":"aca9e9c874775aaafe530b40cb5d5bbc4cb5873d4dcbdc4734f8788f6947a7cf"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.731212 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" event={"ID":"3d100ddd-343c-48f6-ad0a-e08d3e23a904","Type":"ContainerStarted","Data":"430d411464bb34eb6bcacc91fa870f01ce66a61a74d961098d1c64c3a1da900d"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.733689 4593 generic.go:334] "Generic (PLEG): container finished" podID="dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9" containerID="ea824cae612e38a73d8eebdcc401a4ebea50907fa6711e8a50aae46ac9a1cc2a" exitCode=0 Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.733750 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" event={"ID":"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9","Type":"ContainerDied","Data":"ea824cae612e38a73d8eebdcc401a4ebea50907fa6711e8a50aae46ac9a1cc2a"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.733768 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" event={"ID":"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9","Type":"ContainerStarted","Data":"ef13c43f67220a68a0302026a063a5119b05d414e0e4b778e47f86ed7a4f73d1"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.738499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" event={"ID":"a62104dd-d659-409a-b8f5-85aaf2856a14","Type":"ContainerStarted","Data":"acbb97693467425ef2ea6a339415e6dda1d0d67a81e3c8acbbbd9196103ea943"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 
11:01:16.738535 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" event={"ID":"a62104dd-d659-409a-b8f5-85aaf2856a14","Type":"ContainerStarted","Data":"9eed55ee0a88f35fc2bf20b9123f7aae8a2cd1091b8b30b1223e2725c98e46d9"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.738779 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.739038 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.741967 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" event={"ID":"21b7f343-d887-4bdf-85c0-9639179e9c56","Type":"ContainerStarted","Data":"e2dc054b9821ef55d0dadbbf18c2f3d134fd990c3496cee804b35dab95a78762"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.742002 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" event={"ID":"21b7f343-d887-4bdf-85c0-9639179e9c56","Type":"ContainerStarted","Data":"6a7fec5b24991a80130c767d90052f8071d829bf02577def2e5028fca2a30758"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.742499 4593 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fnv5h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.742543 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.746082 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" event={"ID":"76a22425-a78d-4304-b158-f577c6ef4c4f","Type":"ContainerStarted","Data":"9eac3a17a0d80747b4c19589283eedb53fbdc19757a21659394b8e0db2f8d72d"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.746146 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" event={"ID":"76a22425-a78d-4304-b158-f577c6ef4c4f","Type":"ContainerStarted","Data":"334a01364083a20e9cff55591ab0397980e71497fd4d2b540c48088a18808a8d"} Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.746354 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.747182 4593 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9td98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.747233 4593 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.762016 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.777710 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.797335 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.817470 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.838645 4593 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.858037 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.877506 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.898040 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.918483 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.937353 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.962798 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-stats-auth\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.962881 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.962901 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-client\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.962916 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.962953 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vpzz\" (UniqueName: \"kubernetes.io/projected/1c91d49f-a382-4279-91c7-a43b3f1e071e-kube-api-access-2vpzz\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963000 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/51f11901-9a27-4368-9e6d-9ae05692c942-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963033 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963068 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-images\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963151 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-config\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963803 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9pp\" (UniqueName: \"kubernetes.io/projected/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-kube-api-access-bk9pp\") pod \"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.963971 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661d5765-a5d7-4cb4-87b9-284f36dc385e-serving-cert\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964048 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964069 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8246045d-6937-4d02-b488-24bcf2eec4bf-serving-cert\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964102 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-metrics-certs\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964118 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964142 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964159 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t2jr\" (UniqueName: \"kubernetes.io/projected/fa5b3597-636e-4cf0-ad99-755378e23867-kube-api-access-5t2jr\") pod \"downloads-7954f5f757-t7wn4\" (UID: \"fa5b3597-636e-4cf0-ad99-755378e23867\") " pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964175 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-service-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964195 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-service-ca-bundle\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964213 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-n97j8\" (UniqueName: \"kubernetes.io/projected/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-kube-api-access-n97j8\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964276 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964318 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/915745e3-1528-4d5f-84a6-001471123924-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964387 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51f11901-9a27-4368-9e6d-9ae05692c942-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964489 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-images\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964519 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964546 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-default-certificate\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964583 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964610 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964627 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915745e3-1528-4d5f-84a6-001471123924-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964656 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bd78b0-707f-4422-8b39-bd751a8cdcd6-config\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964675 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/478971f0-c97c-4eb1-86d2-50af06b8aafc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.964732 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-service-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965061 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965091 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965134 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 
11:01:16.965152 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965169 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965182 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-client\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965201 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-proxy-tls\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965227 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965242 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965256 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965270 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02bd78b0-707f-4422-8b39-bd751a8cdcd6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965299 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965313 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965327 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7r8\" (UniqueName: \"kubernetes.io/projected/bb259eac-6aa7-42d9-883b-2af6b63af4b8-kube-api-access-5x7r8\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965341 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqrvc\" (UniqueName: \"kubernetes.io/projected/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-kube-api-access-lqrvc\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965354 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/478971f0-c97c-4eb1-86d2-50af06b8aafc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965392 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965409 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7lr9\" (UniqueName: \"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-kube-api-access-r7lr9\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.965988 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c91d49f-a382-4279-91c7-a43b3f1e071e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966008 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2697\" (UniqueName: \"kubernetes.io/projected/8246045d-6937-4d02-b488-24bcf2eec4bf-kube-api-access-l2697\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966068 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966308 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q92mj\" (UniqueName: \"kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966354 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbmg4\" (UniqueName: \"kubernetes.io/projected/661d5765-a5d7-4cb4-87b9-284f36dc385e-kube-api-access-fbmg4\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966377 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-encryption-config\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966441 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vg54\" (UniqueName: \"kubernetes.io/projected/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-kube-api-access-4vg54\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966459 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lzzp\" (UniqueName: \"kubernetes.io/projected/edf60cff-ba6c-450f-bcec-7b14d7513120-kube-api-access-7lzzp\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966478 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966494 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966597 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02bd78b0-707f-4422-8b39-bd751a8cdcd6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966622 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2plf\" (UniqueName: \"kubernetes.io/projected/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-kube-api-access-t2plf\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966767 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966786 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966954 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zkh\" (UniqueName: \"kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.966999 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c91d49f-a382-4279-91c7-a43b3f1e071e-proxy-tls\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967083 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-policies\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: 
\"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967110 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967134 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967164 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb259eac-6aa7-42d9-883b-2af6b63af4b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967200 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edf60cff-ba6c-450f-bcec-7b14d7513120-metrics-tls\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967329 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-serving-cert\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967367 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-config\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967394 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/478971f0-c97c-4eb1-86d2-50af06b8aafc-config\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967418 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-config\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 
crc kubenswrapper[4593]: I0129 11:01:16.967442 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967464 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967513 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967545 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-trusted-ca\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967571 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-config\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967610 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967698 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9stq9\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.967739 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:16 crc kubenswrapper[4593]: E0129 11:01:16.967887 4593 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.467874326 +0000 UTC m=+143.340908517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.968043 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-serving-cert\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.968098 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/915745e3-1528-4d5f-84a6-001471123924-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:16 crc kubenswrapper[4593]: I0129 11:01:16.968124 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-dir\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069361 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.069531 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.569503838 +0000 UTC m=+143.442538039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069667 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069702 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069729 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069752 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069780 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069810 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069835 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-client\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069859 4593 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-proxy-tls\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069885 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-profile-collector-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069908 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/59084a0c-807b-47c9-b905-6e07817bcb89-tmpfs\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069937 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25ddj\" (UniqueName: \"kubernetes.io/projected/fae65f9f-a5ea-442a-8c78-aa650d330c4d-kube-api-access-25ddj\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069972 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.069996 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070731 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bce548b-2c64-4ac5-a797-979de4cf7656-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070769 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070794 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/02bd78b0-707f-4422-8b39-bd751a8cdcd6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070819 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-cabundle\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070851 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070875 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e50d23-1adc-4462-9424-1d2157c2ff93-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070901 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070928 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x7r8\" (UniqueName: \"kubernetes.io/projected/bb259eac-6aa7-42d9-883b-2af6b63af4b8-kube-api-access-5x7r8\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070954 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqrvc\" (UniqueName: \"kubernetes.io/projected/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-kube-api-access-lqrvc\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.070979 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/478971f0-c97c-4eb1-86d2-50af06b8aafc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071002 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071043 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071070 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071092 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-plugins-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071121 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7lr9\" (UniqueName: \"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-kube-api-access-r7lr9\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071151 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqqbr\" (UniqueName: \"kubernetes.io/projected/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-kube-api-access-hqqbr\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071177 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c91d49f-a382-4279-91c7-a43b3f1e071e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072552 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2697\" (UniqueName: \"kubernetes.io/projected/8246045d-6937-4d02-b488-24bcf2eec4bf-kube-api-access-l2697\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072595 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws2zw\" (UniqueName: 
\"kubernetes.io/projected/65e50d23-1adc-4462-9424-1d2157c2ff93-kube-api-access-ws2zw\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072627 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072666 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q92mj\" (UniqueName: \"kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072692 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlw88\" (UniqueName: \"kubernetes.io/projected/719f2fcb-45e2-4600-82d9-fbf4263201a2-kube-api-access-rlw88\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072718 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbmg4\" (UniqueName: \"kubernetes.io/projected/661d5765-a5d7-4cb4-87b9-284f36dc385e-kube-api-access-fbmg4\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072743 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-encryption-config\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072771 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vg54\" (UniqueName: \"kubernetes.io/projected/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-kube-api-access-4vg54\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072793 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95lmg\" (UniqueName: \"kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072819 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lzzp\" (UniqueName: 
\"kubernetes.io/projected/edf60cff-ba6c-450f-bcec-7b14d7513120-kube-api-access-7lzzp\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072842 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-webhook-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072865 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-socket-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072896 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072919 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072944 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072971 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p88cj\" (UniqueName: \"kubernetes.io/projected/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-kube-api-access-p88cj\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072998 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e7651ef0-a985-4314-a20a-7103624a257a-metrics-tls\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073022 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-srv-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: 
\"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073052 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02bd78b0-707f-4422-8b39-bd751a8cdcd6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073076 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4n58\" (UniqueName: \"kubernetes.io/projected/59084a0c-807b-47c9-b905-6e07817bcb89-kube-api-access-k4n58\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073102 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2plf\" (UniqueName: \"kubernetes.io/projected/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-kube-api-access-t2plf\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073124 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-certs\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073144 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxwx5\" (UniqueName: \"kubernetes.io/projected/bf0241bd-f637-4b8b-b78a-797549fe5da9-kube-api-access-hxwx5\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073156 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073168 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073224 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073253 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073276 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-registration-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073306 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57zkh\" (UniqueName: \"kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073329 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-metrics-tls\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073350 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srcl6\" (UniqueName: \"kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073398 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c91d49f-a382-4279-91c7-a43b3f1e071e-proxy-tls\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073420 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-policies\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073440 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073466 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2cvq\" 
(UniqueName: \"kubernetes.io/projected/c5d626cc-ab7a-408c-9955-c3fc676a799b-kube-api-access-z2cvq\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073489 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-srv-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073510 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-key\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073534 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvm7v\" (UniqueName: \"kubernetes.io/projected/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-kube-api-access-cvm7v\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.073609 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.074366 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb259eac-6aa7-42d9-883b-2af6b63af4b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.074405 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edf60cff-ba6c-450f-bcec-7b14d7513120-metrics-tls\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.074425 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.075265 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: 
\"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.077892 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-policies\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.078672 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.071319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.079463 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072242 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.072185 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.079960 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.080002 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-etcd-client\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.080275 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.080396 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1c91d49f-a382-4279-91c7-a43b3f1e071e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.080844 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.082342 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.074428 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-serving-cert\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083071 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083279 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-cert\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083303 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-node-bootstrap-token\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083342 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-config\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083359 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-config\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083401 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083419 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083436 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/478971f0-c97c-4eb1-86d2-50af06b8aafc-config\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.083453 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fae65f9f-a5ea-442a-8c78-aa650d330c4d-serving-cert\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.084440 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.084815 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-config\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.084944 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-encryption-config\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.084998 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir\") pod 
\"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.085045 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.085245 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.585234117 +0000 UTC m=+143.458268308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095213 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-trusted-ca\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095268 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-config\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095288 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7651ef0-a985-4314-a20a-7103624a257a-trusted-ca\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095304 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095334 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-csi-data-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc 
kubenswrapper[4593]: I0129 11:01:17.095353 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095368 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-apiservice-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095389 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9stq9\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095406 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbjn6\" (UniqueName: \"kubernetes.io/projected/58e36a23-974a-4afd-b226-bb194d489cf0-kube-api-access-vbjn6\") pod \"migrator-59844c95c7-8b552\" (UID: \"58e36a23-974a-4afd-b226-bb194d489cf0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095421 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae65f9f-a5ea-442a-8c78-aa650d330c4d-config\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095444 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095466 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd7cw\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-kube-api-access-dd7cw\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095480 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-serving-cert\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095494 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095519 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/915745e3-1528-4d5f-84a6-001471123924-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095534 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-dir\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095550 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-stats-auth\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095565 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nksd\" (UniqueName: \"kubernetes.io/projected/9bce548b-2c64-4ac5-a797-979de4cf7656-kube-api-access-2nksd\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095579 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e50d23-1adc-4462-9424-1d2157c2ff93-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095597 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.095612 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-client\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098307 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098337 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vpzz\" (UniqueName: \"kubernetes.io/projected/1c91d49f-a382-4279-91c7-a43b3f1e071e-kube-api-access-2vpzz\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098359 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tcj\" (UniqueName: \"kubernetes.io/projected/6910728e-feba-4826-8447-11f4cf860c30-kube-api-access-g5tcj\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098381 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/51f11901-9a27-4368-9e6d-9ae05692c942-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098402 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098419 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-images\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.090610 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/edf60cff-ba6c-450f-bcec-7b14d7513120-metrics-tls\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099661 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-config\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099692 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk9pp\" (UniqueName: \"kubernetes.io/projected/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-kube-api-access-bk9pp\") pod 
\"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099710 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661d5765-a5d7-4cb4-87b9-284f36dc385e-serving-cert\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099729 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099750 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8246045d-6937-4d02-b488-24bcf2eec4bf-serving-cert\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099766 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-mountpoint-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099784 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdr5\" (UniqueName: \"kubernetes.io/projected/e9136490-ddbf-4318-91c6-e73d74e7b599-kube-api-access-vvdr5\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099803 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099822 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t2jr\" (UniqueName: \"kubernetes.io/projected/fa5b3597-636e-4cf0-ad99-755378e23867-kube-api-access-5t2jr\") pod \"downloads-7954f5f757-t7wn4\" (UID: \"fa5b3597-636e-4cf0-ad99-755378e23867\") " pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099821 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1c91d49f-a382-4279-91c7-a43b3f1e071e-proxy-tls\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 
29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099837 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-service-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.099989 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-metrics-certs\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100015 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100036 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/719f2fcb-45e2-4600-82d9-fbf4263201a2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100162 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100184 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/915745e3-1528-4d5f-84a6-001471123924-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100233 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-service-ca-bundle\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.097950 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bb259eac-6aa7-42d9-883b-2af6b63af4b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.085457 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/478971f0-c97c-4eb1-86d2-50af06b8aafc-config\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100475 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-service-ca-bundle\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100506 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n97j8\" (UniqueName: \"kubernetes.io/projected/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-kube-api-access-n97j8\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.098273 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02bd78b0-707f-4422-8b39-bd751a8cdcd6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100641 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51f11901-9a27-4368-9e6d-9ae05692c942-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100665 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-images\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100792 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100819 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-default-certificate\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100837 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100921 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-serving-cert\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.100956 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.091893 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.094117 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.101826 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95l2\" (UniqueName: \"kubernetes.io/projected/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-kube-api-access-c95l2\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.101857 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-service-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.101983 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915745e3-1528-4d5f-84a6-001471123924-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.102005 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bd78b0-707f-4422-8b39-bd751a8cdcd6-config\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.102130 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-proxy-tls\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.102157 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/478971f0-c97c-4eb1-86d2-50af06b8aafc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.102199 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-config-volume\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.101676 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/478971f0-c97c-4eb1-86d2-50af06b8aafc-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.103321 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.104150 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02bd78b0-707f-4422-8b39-bd751a8cdcd6-config\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.104711 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-service-ca\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.106941 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.106993 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/915745e3-1528-4d5f-84a6-001471123924-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.107044 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-audit-dir\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.107510 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51f11901-9a27-4368-9e6d-9ae05692c942-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.107680 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.108606 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-config\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.110131 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.110309 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-service-ca-bundle\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.110471 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-images\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.110819 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.111400 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-trusted-ca\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.111423 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bb259eac-6aa7-42d9-883b-2af6b63af4b8-images\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.111741 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.112069 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/661d5765-a5d7-4cb4-87b9-284f36dc385e-config\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.112083 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8246045d-6937-4d02-b488-24bcf2eec4bf-config\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.112608 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.114802 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.114860 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x7r8\" (UniqueName: \"kubernetes.io/projected/bb259eac-6aa7-42d9-883b-2af6b63af4b8-kube-api-access-5x7r8\") pod \"machine-api-operator-5694c8668f-vtdww\" (UID: \"bb259eac-6aa7-42d9-883b-2af6b63af4b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.114998 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/661d5765-a5d7-4cb4-87b9-284f36dc385e-serving-cert\") pod 
\"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.115096 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.115187 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-stats-auth\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.118943 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.119196 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-serving-cert\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.119358 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8246045d-6937-4d02-b488-24bcf2eec4bf-serving-cert\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.119466 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.119814 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/915745e3-1528-4d5f-84a6-001471123924-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.119993 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.120127 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/51f11901-9a27-4368-9e6d-9ae05692c942-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.120499 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.120543 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-metrics-certs\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.122239 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-etcd-client\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.122645 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-default-certificate\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.132583 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57zkh\" (UniqueName: \"kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh\") pod \"console-f9d7485db-8425v\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.151775 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02bd78b0-707f-4422-8b39-bd751a8cdcd6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-9cr59\" (UID: \"02bd78b0-707f-4422-8b39-bd751a8cdcd6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.171090 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.175931 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7lr9\" (UniqueName: \"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-kube-api-access-r7lr9\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.193227 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2697\" (UniqueName: \"kubernetes.io/projected/8246045d-6937-4d02-b488-24bcf2eec4bf-kube-api-access-l2697\") pod \"authentication-operator-69f744f599-gz9wd\" (UID: \"8246045d-6937-4d02-b488-24bcf2eec4bf\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205335 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.205530 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.70549688 +0000 UTC m=+143.578531071 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205592 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbjn6\" (UniqueName: \"kubernetes.io/projected/58e36a23-974a-4afd-b226-bb194d489cf0-kube-api-access-vbjn6\") pod \"migrator-59844c95c7-8b552\" (UID: \"58e36a23-974a-4afd-b226-bb194d489cf0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205660 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae65f9f-a5ea-442a-8c78-aa650d330c4d-config\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205684 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd7cw\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-kube-api-access-dd7cw\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205702 4593 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205732 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nksd\" (UniqueName: \"kubernetes.io/projected/9bce548b-2c64-4ac5-a797-979de4cf7656-kube-api-access-2nksd\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205753 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e50d23-1adc-4462-9424-1d2157c2ff93-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205784 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tcj\" (UniqueName: \"kubernetes.io/projected/6910728e-feba-4826-8447-11f4cf860c30-kube-api-access-g5tcj\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205837 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-mountpoint-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205868 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdr5\" (UniqueName: \"kubernetes.io/projected/e9136490-ddbf-4318-91c6-e73d74e7b599-kube-api-access-vvdr5\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205897 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/719f2fcb-45e2-4600-82d9-fbf4263201a2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205948 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c95l2\" (UniqueName: \"kubernetes.io/projected/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-kube-api-access-c95l2\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205968 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-config-volume\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.205996 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206017 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-profile-collector-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206039 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/59084a0c-807b-47c9-b905-6e07817bcb89-tmpfs\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206064 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25ddj\" (UniqueName: \"kubernetes.io/projected/fae65f9f-a5ea-442a-8c78-aa650d330c4d-kube-api-access-25ddj\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206090 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bce548b-2c64-4ac5-a797-979de4cf7656-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206112 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-cabundle\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206503 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae65f9f-a5ea-442a-8c78-aa650d330c4d-config\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.206914 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-config-volume\") pod \"dns-default-29j27\" (UID: 
\"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.207028 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.207076 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/59084a0c-807b-47c9-b905-6e07817bcb89-tmpfs\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.207264 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-mountpoint-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208016 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-cabundle\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208080 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e50d23-1adc-4462-9424-1d2157c2ff93-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208375 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208413 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208430 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-plugins-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208463 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hqqbr\" (UniqueName: \"kubernetes.io/projected/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-kube-api-access-hqqbr\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208482 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws2zw\" (UniqueName: \"kubernetes.io/projected/65e50d23-1adc-4462-9424-1d2157c2ff93-kube-api-access-ws2zw\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208538 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlw88\" (UniqueName: \"kubernetes.io/projected/719f2fcb-45e2-4600-82d9-fbf4263201a2-kube-api-access-rlw88\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208568 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95lmg\" (UniqueName: \"kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208591 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-webhook-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208610 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-socket-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208649 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208672 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p88cj\" (UniqueName: \"kubernetes.io/projected/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-kube-api-access-p88cj\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208695 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e7651ef0-a985-4314-a20a-7103624a257a-metrics-tls\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: 
\"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208709 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-srv-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208726 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4n58\" (UniqueName: \"kubernetes.io/projected/59084a0c-807b-47c9-b905-6e07817bcb89-kube-api-access-k4n58\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208746 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-certs\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208761 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxwx5\" (UniqueName: \"kubernetes.io/projected/bf0241bd-f637-4b8b-b78a-797549fe5da9-kube-api-access-hxwx5\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208778 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208792 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-registration-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208809 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-metrics-tls\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208825 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srcl6\" (UniqueName: \"kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208852 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2cvq\" 
(UniqueName: \"kubernetes.io/projected/c5d626cc-ab7a-408c-9955-c3fc676a799b-kube-api-access-z2cvq\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208869 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-srv-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208884 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-key\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208903 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvm7v\" (UniqueName: \"kubernetes.io/projected/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-kube-api-access-cvm7v\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208920 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-cert\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208934 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-node-bootstrap-token\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208958 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208974 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fae65f9f-a5ea-442a-8c78-aa650d330c4d-serving-cert\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.208991 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7651ef0-a985-4314-a20a-7103624a257a-trusted-ca\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc 
kubenswrapper[4593]: I0129 11:01:17.209126 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.209149 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-csi-data-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.209178 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-apiservice-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.210133 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/9bce548b-2c64-4ac5-a797-979de4cf7656-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.210236 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e50d23-1adc-4462-9424-1d2157c2ff93-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.210521 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-socket-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.211585 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-apiservice-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.211723 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-registration-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.212782 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.212920 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/719f2fcb-45e2-4600-82d9-fbf4263201a2-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.213287 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.213800 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65e50d23-1adc-4462-9424-1d2157c2ff93-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.212790 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-profile-collector-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.213958 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-plugins-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.214254 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q92mj\" (UniqueName: \"kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj\") pod \"oauth-openshift-558db77b4-ftchp\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.214280 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.214930 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-metrics-tls\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.216074 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7651ef0-a985-4314-a20a-7103624a257a-trusted-ca\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.216129 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.216298 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e9136490-ddbf-4318-91c6-e73d74e7b599-csi-data-dir\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.217113 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-srv-cert\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.217201 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fae65f9f-a5ea-442a-8c78-aa650d330c4d-serving-cert\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.217678 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.71766228 +0000 UTC m=+143.590696531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.219620 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-certs\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.219932 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-cert\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.220246 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.220456 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bf0241bd-f637-4b8b-b78a-797549fe5da9-node-bootstrap-token\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.220912 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e7651ef0-a985-4314-a20a-7103624a257a-metrics-tls\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.221299 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/59084a0c-807b-47c9-b905-6e07817bcb89-webhook-cert\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.221402 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6910728e-feba-4826-8447-11f4cf860c30-srv-cert\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.221672 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hw52m\" (UID: 
\"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.223751 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c5d626cc-ab7a-408c-9955-c3fc676a799b-signing-key\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.238029 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbmg4\" (UniqueName: \"kubernetes.io/projected/661d5765-a5d7-4cb4-87b9-284f36dc385e-kube-api-access-fbmg4\") pod \"console-operator-58897d9998-fm7cc\" (UID: \"661d5765-a5d7-4cb4-87b9-284f36dc385e\") " pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.259321 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vg54\" (UniqueName: \"kubernetes.io/projected/9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc-kube-api-access-4vg54\") pod \"router-default-5444994796-xx52v\" (UID: \"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc\") " pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.275729 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lzzp\" (UniqueName: \"kubernetes.io/projected/edf60cff-ba6c-450f-bcec-7b14d7513120-kube-api-access-7lzzp\") pod \"dns-operator-744455d44c-l64wd\" (UID: \"edf60cff-ba6c-450f-bcec-7b14d7513120\") " pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.277795 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.300205 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.310237 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.310613 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.810583258 +0000 UTC m=+143.683617459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.311032 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.311304 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqrvc\" (UniqueName: \"kubernetes.io/projected/f0ee22f5-d5c3-4686-ab5d-53223d05bef6-kube-api-access-lqrvc\") pod \"apiserver-7bbb656c7d-djdmx\" (UID: \"f0ee22f5-d5c3-4686-ab5d-53223d05bef6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.311451 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.811435922 +0000 UTC m=+143.684470113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.312660 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2plf\" (UniqueName: \"kubernetes.io/projected/8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50-kube-api-access-t2plf\") pod \"etcd-operator-b45778765-j7hr6\" (UID: \"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.350360 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9stq9\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.369954 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51f11901-9a27-4368-9e6d-9ae05692c942-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ldr8c\" (UID: \"51f11901-9a27-4368-9e6d-9ae05692c942\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.407052 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vpzz\" 
(UniqueName: \"kubernetes.io/projected/1c91d49f-a382-4279-91c7-a43b3f1e071e-kube-api-access-2vpzz\") pod \"machine-config-controller-84d6567774-lrstj\" (UID: \"1c91d49f-a382-4279-91c7-a43b3f1e071e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.412003 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.412443 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:17.912424546 +0000 UTC m=+143.785458737 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.412557 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.425104 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.431435 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915745e3-1528-4d5f-84a6-001471123924-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ct922\" (UID: \"915745e3-1528-4d5f-84a6-001471123924\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.436616 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.442503 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n97j8\" (UniqueName: \"kubernetes.io/projected/dc1056e0-74e9-4be8-bcdf-92604e23a2e1-kube-api-access-n97j8\") pod \"machine-config-operator-74547568cd-qjbwn\" (UID: \"dc1056e0-74e9-4be8-bcdf-92604e23a2e1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.448523 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.454879 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/478971f0-c97c-4eb1-86d2-50af06b8aafc-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-gmw8k\" (UID: \"478971f0-c97c-4eb1-86d2-50af06b8aafc\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.506909 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk9pp\" (UniqueName: \"kubernetes.io/projected/5d8acfc6-0334-4294-8dd6-c3091ebb69d3-kube-api-access-bk9pp\") pod \"cluster-samples-operator-665b6dd947-6dlwj\" (UID: \"5d8acfc6-0334-4294-8dd6-c3091ebb69d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.513460 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.513829 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.01378017 +0000 UTC m=+143.886814361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.515442 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.526332 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.528311 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t2jr\" (UniqueName: \"kubernetes.io/projected/fa5b3597-636e-4cf0-ad99-755378e23867-kube-api-access-5t2jr\") pod \"downloads-7954f5f757-t7wn4\" (UID: \"fa5b3597-636e-4cf0-ad99-755378e23867\") " pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.546068 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.554610 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.558221 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nksd\" (UniqueName: \"kubernetes.io/projected/9bce548b-2c64-4ac5-a797-979de4cf7656-kube-api-access-2nksd\") pod \"control-plane-machine-set-operator-78cbb6b69f-pf5p2\" (UID: \"9bce548b-2c64-4ac5-a797-979de4cf7656\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.564045 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-8425v"]
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.565761 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.582377 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbjn6\" (UniqueName: \"kubernetes.io/projected/58e36a23-974a-4afd-b226-bb194d489cf0-kube-api-access-vbjn6\") pod \"migrator-59844c95c7-8b552\" (UID: \"58e36a23-974a-4afd-b226-bb194d489cf0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.583964 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.584026 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vtdww"]
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.585517 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd7cw\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-kube-api-access-dd7cw\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.590943 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.606327 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.624252 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tcj\" (UniqueName: \"kubernetes.io/projected/6910728e-feba-4826-8447-11f4cf860c30-kube-api-access-g5tcj\") pod \"olm-operator-6b444d44fb-g9wvz\" (UID: \"6910728e-feba-4826-8447-11f4cf860c30\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.628003 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c95l2\" (UniqueName: \"kubernetes.io/projected/28ad6acc-fb5e-4d71-9f36-492c3b1262d2-kube-api-access-c95l2\") pod \"catalog-operator-68c6474976-vlh9s\" (UID: \"28ad6acc-fb5e-4d71-9f36-492c3b1262d2\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.628376 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.629930 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.630303 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.630614 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.130583375 +0000 UTC m=+144.003617566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.639056 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59"]
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.641615 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.667610 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25ddj\" (UniqueName: \"kubernetes.io/projected/fae65f9f-a5ea-442a-8c78-aa650d330c4d-kube-api-access-25ddj\") pod \"service-ca-operator-777779d784-rpfbq\" (UID: \"fae65f9f-a5ea-442a-8c78-aa650d330c4d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.671284 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdr5\" (UniqueName: \"kubernetes.io/projected/e9136490-ddbf-4318-91c6-e73d74e7b599-kube-api-access-vvdr5\") pod \"csi-hostpathplugin-zv27c\" (UID: \"e9136490-ddbf-4318-91c6-e73d74e7b599\") " pod="hostpath-provisioner/csi-hostpathplugin-zv27c"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.680712 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxwx5\" (UniqueName: \"kubernetes.io/projected/bf0241bd-f637-4b8b-b78a-797549fe5da9-kube-api-access-hxwx5\") pod \"machine-config-server-vbsqg\" (UID: \"bf0241bd-f637-4b8b-b78a-797549fe5da9\") " pod="openshift-machine-config-operator/machine-config-server-vbsqg"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.687071 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.707359 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.726405 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7651ef0-a985-4314-a20a-7103624a257a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-vdt9h\" (UID: \"e7651ef0-a985-4314-a20a-7103624a257a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.732023 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqqbr\" (UniqueName: \"kubernetes.io/projected/0a7ffb2d-39e9-426f-9364-ebe193a5adc8-kube-api-access-hqqbr\") pod \"dns-default-29j27\" (UID: \"0a7ffb2d-39e9-426f-9364-ebe193a5adc8\") " pod="openshift-dns/dns-default-29j27"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.736857 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.737503 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.237485575 +0000 UTC m=+144.110519776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.739426 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vbsqg"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.753049 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-29j27"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.766120 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws2zw\" (UniqueName: \"kubernetes.io/projected/65e50d23-1adc-4462-9424-1d2157c2ff93-kube-api-access-ws2zw\") pod \"kube-storage-version-migrator-operator-b67b599dd-c8vv4\" (UID: \"65e50d23-1adc-4462-9424-1d2157c2ff93\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.766385 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-t7wn4"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.766393 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zv27c"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.767343 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlw88\" (UniqueName: \"kubernetes.io/projected/719f2fcb-45e2-4600-82d9-fbf4263201a2-kube-api-access-rlw88\") pod \"package-server-manager-789f6589d5-m8dfr\" (UID: \"719f2fcb-45e2-4600-82d9-fbf4263201a2\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.796568 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4n58\" (UniqueName: \"kubernetes.io/projected/59084a0c-807b-47c9-b905-6e07817bcb89-kube-api-access-k4n58\") pod \"packageserver-d55dfcdfc-zpjgp\" (UID: \"59084a0c-807b-47c9-b905-6e07817bcb89\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.806306 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.813650 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srcl6\" (UniqueName: \"kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6\") pod \"marketplace-operator-79b997595-hw52m\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " pod="openshift-marketplace/marketplace-operator-79b997595-hw52m"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.816859 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" event={"ID":"21b7f343-d887-4bdf-85c0-9639179e9c56","Type":"ContainerStarted","Data":"3b9102c29ded7f3b1489c588a4b593d3cebe14bc8fa2ee108915c50f56d9c663"}
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.837880 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p88cj\" (UniqueName: \"kubernetes.io/projected/77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd-kube-api-access-p88cj\") pod \"ingress-canary-jnw9r\" (UID: \"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd\") " pod="openshift-ingress-canary/ingress-canary-jnw9r"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.838621 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.840784 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.340764562 +0000 UTC m=+144.213798763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.855124 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.855648 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.355620628 +0000 UTC m=+144.228654819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.872895 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95lmg\" (UniqueName: \"kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg\") pod \"collect-profiles-29494740-bkdhm\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.875777 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xx52v" event={"ID":"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc","Type":"ContainerStarted","Data":"5318c72dab4e60db769bd489cccc03cce121501c49e9c505d3cbc034a7383dd0"}
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.875823 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xx52v" event={"ID":"9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc","Type":"ContainerStarted","Data":"dc32442090514fd507db2550fc7ca88aa73610ee15acc127f9a2ee87dfa40516"}
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.876919 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvm7v\" (UniqueName: \"kubernetes.io/projected/8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc-kube-api-access-cvm7v\") pod \"multus-admission-controller-857f4d67dd-rnn8b\" (UID: \"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.887949 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2cvq\" (UniqueName: \"kubernetes.io/projected/c5d626cc-ab7a-408c-9955-c3fc676a799b-kube-api-access-z2cvq\") pod \"service-ca-9c57cc56f-96whs\" (UID: \"c5d626cc-ab7a-408c-9955-c3fc676a799b\") " pod="openshift-service-ca/service-ca-9c57cc56f-96whs"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.890591 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" event={"ID":"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9","Type":"ContainerStarted","Data":"ed50f82eb21665ad0890e00283aeb85786484b14c6fef7e831ff132d86d798cc"}
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.890651 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" event={"ID":"dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9","Type":"ContainerStarted","Data":"b63d0af04b2f51a2972545516629f3571ef5538eed8c38c76235e7ce0ea2c411"}
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.920105 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.933172 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" event={"ID":"02bd78b0-707f-4422-8b39-bd751a8cdcd6","Type":"ContainerStarted","Data":"8cc34a9f01e6a31bd34bf1aad0256d9170eb730a022e8dc844968e80f0f4d1d1"}
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.937244 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" event={"ID":"bb259eac-6aa7-42d9-883b-2af6b63af4b8","Type":"ContainerStarted","Data":"3d3c29b8d7af237ec93e0cca6239f6206a877a189af80d2749e29b6cadc9b4b0"}
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.950601 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.953032 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.955849 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.959399 4593 generic.go:334] "Generic (PLEG): container finished" podID="43e8598d-f86e-425e-8418-bcfb93e3bd63" containerID="e837e36ad5d7e8a69016f9ffac8611b74ac4184f83d4fdd3d146af3a3120a4ce" exitCode=0
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.959471 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" event={"ID":"43e8598d-f86e-425e-8418-bcfb93e3bd63","Type":"ContainerDied","Data":"e837e36ad5d7e8a69016f9ffac8611b74ac4184f83d4fdd3d146af3a3120a4ce"}
Jan 29 11:01:17 crc kubenswrapper[4593]: E0129 11:01:17.959536 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.459521943 +0000 UTC m=+144.332556134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.971088 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.974972 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.981173 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8425v" event={"ID":"ccb12507-4eef-467d-885d-982c68807bda","Type":"ContainerStarted","Data":"b2d3338b1514b5c7e9256324d64b1f803fa4ccbc8cc1a14cc26386a3d7708bb8"}
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.981473 4593 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9td98 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.981498 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 29 11:01:17 crc kubenswrapper[4593]: I0129 11:01:17.989669 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m"
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.013235 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.019032 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-96whs"
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.032747 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.057589 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.066496 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.566477564 +0000 UTC m=+144.439511815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.082392 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jnw9r"
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.162282 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.162692 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.662674343 +0000 UTC m=+144.535708534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.260804 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"]
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.264506 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.264834 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.76482238 +0000 UTC m=+144.637856571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.304128 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xx52v"
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.313684 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c"]
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.322736 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.322797 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.366089 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.366409 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.86639428 +0000 UTC m=+144.739428471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.409960 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" podStartSLOduration=122.409940418 podStartE2EDuration="2m2.409940418s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:18.407045807 +0000 UTC m=+144.280079998" watchObservedRunningTime="2026-01-29 11:01:18.409940418 +0000 UTC m=+144.282974609"
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.452848 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xx52v" podStartSLOduration=122.452829147 podStartE2EDuration="2m2.452829147s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:18.438418944 +0000 UTC m=+144.311453135" watchObservedRunningTime="2026-01-29 11:01:18.452829147 +0000 UTC m=+144.325863338"
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.453419 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-fm7cc"]
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.468997 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.469554 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:18.969538365 +0000 UTC m=+144.842572556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:18 crc kubenswrapper[4593]: W0129 11:01:18.488734 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode544204e_7186_4a22_a6bf_79a5101af4b6.slice/crio-0d7cf3673b86763198bedf6c07542fda69ead3075260207ea60dca64f8d8ae64 WatchSource:0}: Error finding container 0d7cf3673b86763198bedf6c07542fda69ead3075260207ea60dca64f8d8ae64: Status 404 returned error can't find the container with id 0d7cf3673b86763198bedf6c07542fda69ead3075260207ea60dca64f8d8ae64
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.518009 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx"]
Jan 29 11:01:18 crc kubenswrapper[4593]: W0129 11:01:18.519182 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf0241bd_f637_4b8b_b78a_797549fe5da9.slice/crio-cc948b03dc5861fcf1adda897a33fd0c08a2d15a82e993373c8ea7bd3d78a2b8 WatchSource:0}: Error finding container cc948b03dc5861fcf1adda897a33fd0c08a2d15a82e993373c8ea7bd3d78a2b8: Status 404 returned error can't find the container with id cc948b03dc5861fcf1adda897a33fd0c08a2d15a82e993373c8ea7bd3d78a2b8
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.536578 4593 csr.go:261] certificate signing request csr-gwdhb is approved, waiting to be issued
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.545467 4593 csr.go:257] certificate signing request csr-gwdhb is issued
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.546859 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j7hr6"]
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.579090 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.579440 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.079423847 +0000 UTC m=+144.952458038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.594059 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" podStartSLOduration=122.594043425 podStartE2EDuration="2m2.594043425s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:18.560449076 +0000 UTC m=+144.433483267" watchObservedRunningTime="2026-01-29 11:01:18.594043425 +0000 UTC m=+144.467077606"
Jan 29 11:01:18 crc kubenswrapper[4593]: W0129 11:01:18.598851 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51f11901_9a27_4368_9e6d_9ae05692c942.slice/crio-e14737acfefe545c91d700d01b1615a6ac33df9f296aba9ce0bd95f1608bda2f WatchSource:0}: Error finding container e14737acfefe545c91d700d01b1615a6ac33df9f296aba9ce0bd95f1608bda2f: Status 404 returned error can't find the container with id e14737acfefe545c91d700d01b1615a6ac33df9f296aba9ce0bd95f1608bda2f
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.680097 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922"]
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.682079 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.682504 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.182491739 +0000 UTC m=+145.055525930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:18 crc kubenswrapper[4593]: W0129 11:01:18.731723 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d8d97d7_c0b0_4b84_90a2_42e4c49f9d50.slice/crio-927a09ca9372efb96eca4614820ae2506ca04717e577b1311b75d1ad189f9b1f WatchSource:0}: Error finding container 927a09ca9372efb96eca4614820ae2506ca04717e577b1311b75d1ad189f9b1f: Status 404 returned error can't find the container with id 927a09ca9372efb96eca4614820ae2506ca04717e577b1311b75d1ad189f9b1f
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.784271 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.784759 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.284733648 +0000 UTC m=+145.157767869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.791038 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n4s5k" podStartSLOduration=122.791018833 podStartE2EDuration="2m2.791018833s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:18.772358832 +0000 UTC m=+144.645393013" watchObservedRunningTime="2026-01-29 11:01:18.791018833 +0000 UTC m=+144.664053024"
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.802231 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-l64wd"]
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.846037 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gz9wd"]
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.848511 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k"]
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.858346 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj"]
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.886353 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.886848 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.386832403 +0000 UTC m=+145.259866594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.929531 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2"]
Jan 29 11:01:18 crc kubenswrapper[4593]: I0129 11:01:18.993215 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:18 crc kubenswrapper[4593]: E0129 11:01:18.993728 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.493708591 +0000 UTC m=+145.366742782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: W0129 11:01:19.052179 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8246045d_6937_4d02_b488_24bcf2eec4bf.slice/crio-e06aae70c4e861c81c7cd4182c8eef519e279fc1658ede262cd444363e159ecc WatchSource:0}: Error finding container e06aae70c4e861c81c7cd4182c8eef519e279fc1658ede262cd444363e159ecc: Status 404 returned error can't find the container with id e06aae70c4e861c81c7cd4182c8eef519e279fc1658ede262cd444363e159ecc
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.094544 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.095096 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.595079085 +0000 UTC m=+145.468113286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.128881 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" event={"ID":"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50","Type":"ContainerStarted","Data":"927a09ca9372efb96eca4614820ae2506ca04717e577b1311b75d1ad189f9b1f"}
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.143215 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5gd58" podStartSLOduration=123.143199271 podStartE2EDuration="2m3.143199271s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:19.127620995 +0000 UTC m=+145.000655196" watchObservedRunningTime="2026-01-29 11:01:19.143199271 +0000 UTC m=+145.016233462"
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.192994 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" event={"ID":"f0ee22f5-d5c3-4686-ab5d-53223d05bef6","Type":"ContainerStarted","Data":"57615d8c750f59fb2bc9b3523ad3ef2bc11b07e4737982f42eb88c8e6632c6dd"}
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.195778 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.196158 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.69614316 +0000 UTC m=+145.569177351 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.247156 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" event={"ID":"661d5765-a5d7-4cb4-87b9-284f36dc385e","Type":"ContainerStarted","Data":"632716971daf9c9bb8743ed272d65cb7d1924ec899b8897d893f85f1a7895f47"}
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.249045 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" event={"ID":"e544204e-7186-4a22-a6bf-79a5101af4b6","Type":"ContainerStarted","Data":"0d7cf3673b86763198bedf6c07542fda69ead3075260207ea60dca64f8d8ae64"}
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.250266 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" event={"ID":"51f11901-9a27-4368-9e6d-9ae05692c942","Type":"ContainerStarted","Data":"e14737acfefe545c91d700d01b1615a6ac33df9f296aba9ce0bd95f1608bda2f"}
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.302108 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.302397 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.802386511 +0000 UTC m=+145.675420702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.320091 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" event={"ID":"915745e3-1528-4d5f-84a6-001471123924","Type":"ContainerStarted","Data":"1ad4a0096e5f894db159a22d01e6b99d48da341bc0b421d722d046dfaeb1e15f"}
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.357870 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:01:19 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld
Jan 29 11:01:19 crc kubenswrapper[4593]: [+]process-running ok
Jan 29 11:01:19 crc kubenswrapper[4593]: healthz check failed
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.357929 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.363043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vbsqg" event={"ID":"bf0241bd-f637-4b8b-b78a-797549fe5da9","Type":"ContainerStarted","Data":"cc948b03dc5861fcf1adda897a33fd0c08a2d15a82e993373c8ea7bd3d78a2b8"}
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.404193 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.404381 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.904358873 +0000 UTC m=+145.777393054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.404736 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.405922 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:19.905907346 +0000 UTC m=+145.778941587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.428096 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn"]
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.512205 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.512492 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.012477896 +0000 UTC m=+145.885512087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.547289 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz"]
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.552126 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 10:56:18 +0000 UTC, rotation deadline is 2026-10-23 11:35:29.510602324 +0000 UTC
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.553578 4593 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6408h34m9.957028899s for next certificate rotation
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.614755 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.615211 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.115195738 +0000 UTC m=+145.988229929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.719140 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.720088 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.220068901 +0000 UTC m=+146.093103102 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.761856 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gl968" podStartSLOduration=123.761828438 podStartE2EDuration="2m3.761828438s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:19.760589124 +0000 UTC m=+145.633623335" watchObservedRunningTime="2026-01-29 11:01:19.761828438 +0000 UTC m=+145.634862629"
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.763330 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" podStartSLOduration=123.763322831 podStartE2EDuration="2m3.763322831s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:19.723537367 +0000 UTC m=+145.596571558" watchObservedRunningTime="2026-01-29 11:01:19.763322831 +0000 UTC m=+145.636357022"
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.822982 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.823390 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.323379379 +0000 UTC m=+146.196413570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: W0129 11:01:19.840027 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6910728e_feba_4826_8447_11f4cf860c30.slice/crio-6d689f57c66e3de55ef51591b31cd1492ce48f1961be6ccc3e60f3fed038d637 WatchSource:0}: Error finding container 6d689f57c66e3de55ef51591b31cd1492ce48f1961be6ccc3e60f3fed038d637: Status 404 returned error can't find the container with id 6d689f57c66e3de55ef51591b31cd1492ce48f1961be6ccc3e60f3fed038d637
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.925357 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:19 crc kubenswrapper[4593]: E0129 11:01:19.928209 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.42818993 +0000 UTC m=+146.301224121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.967368 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552"]
Jan 29 11:01:19 crc kubenswrapper[4593]: I0129 11:01:19.986392 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"]
Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.027642 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.028359 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.528343281 +0000 UTC m=+146.401377472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.129041 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.129434 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.629414577 +0000 UTC m=+146.502448768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.139985 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zv27c"]
Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.235482 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.235875 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.735859203 +0000 UTC m=+146.608893394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:20 crc kubenswrapper[4593]: W0129 11:01:20.245451 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef5dc1f_d576_46dd_9de7_2a63c6d4157f.slice/crio-6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5 WatchSource:0}: Error finding container 6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5: Status 404 returned error can't find the container with id 6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5
Jan 29 11:01:20 crc kubenswrapper[4593]: W0129 11:01:20.258471 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9136490_ddbf_4318_91c6_e73d74e7b599.slice/crio-4b018f89a1cca4acd2d0a8ba795cf33e5152a1661724c8be5d8624a6a90f3b3c WatchSource:0}: Error finding container 4b018f89a1cca4acd2d0a8ba795cf33e5152a1661724c8be5d8624a6a90f3b3c: Status 404 returned error can't find the container with id 4b018f89a1cca4acd2d0a8ba795cf33e5152a1661724c8be5d8624a6a90f3b3c
Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.313150 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:01:20 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld
Jan 29 11:01:20 crc kubenswrapper[4593]: [+]process-running ok
Jan 29 11:01:20 crc kubenswrapper[4593]: healthz check failed
Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.313203 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.336495 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.337079 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.837060732 +0000 UTC m=+146.710094923 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.411179 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" event={"ID":"6910728e-feba-4826-8447-11f4cf860c30","Type":"ContainerStarted","Data":"6d689f57c66e3de55ef51591b31cd1492ce48f1961be6ccc3e60f3fed038d637"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.433023 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.438782 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.439200 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:20.939187158 +0000 UTC m=+146.812221349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.491019 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" event={"ID":"02bd78b0-707f-4422-8b39-bd751a8cdcd6","Type":"ContainerStarted","Data":"7c1c7b513147e3ac358e52d2182023600324bd4cc4d0739091fb5509c46818eb"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.495691 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.495825 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.508969 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" event={"ID":"8246045d-6937-4d02-b488-24bcf2eec4bf","Type":"ContainerStarted","Data":"e06aae70c4e861c81c7cd4182c8eef519e279fc1658ede262cd444363e159ecc"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.530473 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" event={"ID":"58e36a23-974a-4afd-b226-bb194d489cf0","Type":"ContainerStarted","Data":"9015e523f1f3cc972f8aef7fc501a0654bbed5a650252aeacf03ee67aea0e98f"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.535527 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" event={"ID":"43e8598d-f86e-425e-8418-bcfb93e3bd63","Type":"ContainerStarted","Data":"f3783d891e0881e705c422a22425dc329851be8b69b4a137cddd1be32a52cace"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.536310 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.539286 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.546541 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.547007 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.046992252 +0000 UTC m=+146.920026443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.555904 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" event={"ID":"eef5dc1f-d576-46dd-9de7-2a63c6d4157f","Type":"ContainerStarted","Data":"6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.584905 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.585552 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" podStartSLOduration=124.585534391 podStartE2EDuration="2m4.585534391s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:20.574145462 +0000 UTC m=+146.447179653" watchObservedRunningTime="2026-01-29 11:01:20.585534391 +0000 UTC m=+146.458568582" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.614714 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-29j27"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.625303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8425v" event={"ID":"ccb12507-4eef-467d-885d-982c68807bda","Type":"ContainerStarted","Data":"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.647561 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.647940 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.147925765 +0000 UTC m=+147.020959946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.672055 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-8425v" podStartSLOduration=124.672017039 podStartE2EDuration="2m4.672017039s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:20.669961701 +0000 UTC m=+146.542995892" watchObservedRunningTime="2026-01-29 11:01:20.672017039 +0000 UTC m=+146.545051230" Jan 29 11:01:20 crc kubenswrapper[4593]: W0129 11:01:20.676788 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod719f2fcb_45e2_4600_82d9_fbf4263201a2.slice/crio-c908198ec5167b59b9e5cb5f2ee7a3101c2d985cc58e6f004c76055b3b344767 WatchSource:0}: Error finding container c908198ec5167b59b9e5cb5f2ee7a3101c2d985cc58e6f004c76055b3b344767: Status 404 returned error can't find the container with id c908198ec5167b59b9e5cb5f2ee7a3101c2d985cc58e6f004c76055b3b344767 Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.749572 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.752515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" event={"ID":"1c91d49f-a382-4279-91c7-a43b3f1e071e","Type":"ContainerStarted","Data":"b29bfbac452a594f19138086c3d449a57600658f19ecd7acbbac7f7c3c50e774"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.752559 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" event={"ID":"1c91d49f-a382-4279-91c7-a43b3f1e071e","Type":"ContainerStarted","Data":"e642ebf4eca78625c6a6c2f89ebbe064cddcf67c3319f8518e67ac8783036146"} Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.753544 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.253512748 +0000 UTC m=+147.126546959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: W0129 11:01:20.754730 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aa74baf_fde3_4dad_aef0_7b8b1ae90098.slice/crio-b58de0681837cbb0473d918da193d9a2ae22eb516c0709127c7bbdd54537d3ef WatchSource:0}: Error finding container b58de0681837cbb0473d918da193d9a2ae22eb516c0709127c7bbdd54537d3ef: Status 404 returned error can't find the container with id b58de0681837cbb0473d918da193d9a2ae22eb516c0709127c7bbdd54537d3ef Jan 29 11:01:20 crc kubenswrapper[4593]: W0129 11:01:20.779197 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a7ffb2d_39e9_426f_9364_ebe193a5adc8.slice/crio-98624fdc0d26251d72edb6c9aa0bf22ff9b8dc38fec1822028fba9395ab4cb63 WatchSource:0}: Error finding container 98624fdc0d26251d72edb6c9aa0bf22ff9b8dc38fec1822028fba9395ab4cb63: Status 404 returned error can't find the container with id 98624fdc0d26251d72edb6c9aa0bf22ff9b8dc38fec1822028fba9395ab4cb63 Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.779499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" event={"ID":"51f11901-9a27-4368-9e6d-9ae05692c942","Type":"ContainerStarted","Data":"098bab81a052020df3698907802477042efe83403d7ec4b65346f8eb610613b2"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.815559 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" event={"ID":"dc1056e0-74e9-4be8-bcdf-92604e23a2e1","Type":"ContainerStarted","Data":"ab516fca4f079c481a8a89388efe9a298f131911c8e7d09547e623e04e04cc44"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.830024 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vbsqg" event={"ID":"bf0241bd-f637-4b8b-b78a-797549fe5da9","Type":"ContainerStarted","Data":"23eb963bc3a50dcc87c540c0aeac1e86811881b9582d9623d3e21dbf881ea281"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.834594 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ldr8c" podStartSLOduration=124.834575024 podStartE2EDuration="2m4.834575024s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:20.823206357 +0000 UTC m=+146.696240548" watchObservedRunningTime="2026-01-29 11:01:20.834575024 +0000 UTC m=+146.707609215" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.854234 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: 
\"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.854556 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.354542192 +0000 UTC m=+147.227576393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.868391 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" event={"ID":"9bce548b-2c64-4ac5-a797-979de4cf7656","Type":"ContainerStarted","Data":"6f3fa8227dd1a01d4a4ae4526929ee8a68020cdbbce4d38f1e42291cf196886a"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.918166 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vbsqg" podStartSLOduration=6.91814575 podStartE2EDuration="6.91814575s" podCreationTimestamp="2026-01-29 11:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:20.89236151 +0000 UTC m=+146.765395711" watchObservedRunningTime="2026-01-29 11:01:20.91814575 +0000 UTC m=+146.791179941" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.920186 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-96whs"] Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.927537 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" event={"ID":"661d5765-a5d7-4cb4-87b9-284f36dc385e","Type":"ContainerStarted","Data":"9f1d52299e8187dd965ebff851605459dcfcb9666a7a05c92d57f944764e3718"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.928234 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.932398 4593 patch_prober.go:28] interesting pod/console-operator-58897d9998-fm7cc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.932467 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" podUID="661d5765-a5d7-4cb4-87b9-284f36dc385e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.964048 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 
11:01:20.964393 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.965065 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:20 crc kubenswrapper[4593]: E0129 11:01:20.966861 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.466839202 +0000 UTC m=+147.339873393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.986499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" event={"ID":"edf60cff-ba6c-450f-bcec-7b14d7513120","Type":"ContainerStarted","Data":"a1b0bbf083dd4815c2b6a4028f68ba1230f78cefe6ada0632169815e19d3d52b"} Jan 29 11:01:20 crc kubenswrapper[4593]: I0129 11:01:20.993274 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.028705 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" event={"ID":"478971f0-c97c-4eb1-86d2-50af06b8aafc","Type":"ContainerStarted","Data":"7416395569cb99fb4a8e8bc9561297a2a31c9aae9116f459c305e399f5bc950c"} Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.030006 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" podStartSLOduration=125.029979867 podStartE2EDuration="2m5.029979867s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:21.022446897 +0000 UTC m=+146.895481098" watchObservedRunningTime="2026-01-29 11:01:21.029979867 +0000 UTC m=+146.903014078" Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.072072 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jnw9r"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.074242 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.074575 4593 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.574557094 +0000 UTC m=+147.447591285 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: W0129 11:01:21.169910 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ee0cc5f_ef60_4aac_9a88_dd2a0c767afc.slice/crio-fc6c20bf52aee2139b8d3d882bb7de41626a8b84520d0a5e1b6cb4ffba81ed35 WatchSource:0}: Error finding container fc6c20bf52aee2139b8d3d882bb7de41626a8b84520d0a5e1b6cb4ffba81ed35: Status 404 returned error can't find the container with id fc6c20bf52aee2139b8d3d882bb7de41626a8b84520d0a5e1b6cb4ffba81ed35 Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.181198 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" event={"ID":"bb259eac-6aa7-42d9-883b-2af6b63af4b8","Type":"ContainerStarted","Data":"0d1f1da0ccfcb7023e9050ac93a5de5cd880847710176ac0ddad52f400549a8f"} Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.181244 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" event={"ID":"e9136490-ddbf-4318-91c6-e73d74e7b599","Type":"ContainerStarted","Data":"4b018f89a1cca4acd2d0a8ba795cf33e5152a1661724c8be5d8624a6a90f3b3c"} Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.181261 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rnn8b"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.181280 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-t7wn4"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.181358 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.186028 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.186927 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.686876474 +0000 UTC m=+147.559910665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.193113 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.193448 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.693432608 +0000 UTC m=+147.566466799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.241950 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" podStartSLOduration=125.241924034 podStartE2EDuration="2m5.241924034s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:21.176831244 +0000 UTC m=+147.049865435" watchObservedRunningTime="2026-01-29 11:01:21.241924034 +0000 UTC m=+147.114958225" Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.259960 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.277564 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp"] Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.296137 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.313000 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.81296706 +0000 UTC m=+147.686001331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.323941 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:21 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:21 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:21 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.324005 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.413859 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.414228 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:21.914210672 +0000 UTC m=+147.787244863 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: W0129 11:01:21.423858 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59084a0c_807b_47c9_b905_6e07817bcb89.slice/crio-b046d49d8fbd3ff0f6f39567887bd3b141e18ac5c2409dd80ef9787be72f9612 WatchSource:0}: Error finding container b046d49d8fbd3ff0f6f39567887bd3b141e18ac5c2409dd80ef9787be72f9612: Status 404 returned error can't find the container with id b046d49d8fbd3ff0f6f39567887bd3b141e18ac5c2409dd80ef9787be72f9612 Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.518341 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.518659 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.018624061 +0000 UTC m=+147.891658252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.621446 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.621862 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.121848937 +0000 UTC m=+147.994883128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.723390 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.723546 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.22351713 +0000 UTC m=+148.096551331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.723686 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.724066 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.224055665 +0000 UTC m=+148.097089856 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.824689 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.825267 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.325248434 +0000 UTC m=+148.198282625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:21 crc kubenswrapper[4593]: I0129 11:01:21.925850 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:21 crc kubenswrapper[4593]: E0129 11:01:21.926289 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.426273909 +0000 UTC m=+148.299308100 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.027028 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.027614 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.527593163 +0000 UTC m=+148.400627364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.128914 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.129823 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.629808461 +0000 UTC m=+148.502842652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.224739 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" event={"ID":"0aa74baf-fde3-4dad-aef0-7b8b1ae90098","Type":"ContainerStarted","Data":"b58de0681837cbb0473d918da193d9a2ae22eb516c0709127c7bbdd54537d3ef"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.232473 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.233151 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.7331307 +0000 UTC m=+148.606164891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.234805 4593 patch_prober.go:28] interesting pod/apiserver-76f77b778f-m9zzn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]log ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]etcd ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/max-in-flight-filter ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 29 11:01:22 crc kubenswrapper[4593]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 29 11:01:22 crc kubenswrapper[4593]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/project.openshift.io-projectcache ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/openshift.io-startinformers ok Jan 29 11:01:22 crc kubenswrapper[4593]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 29 
11:01:22 crc kubenswrapper[4593]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 11:01:22 crc kubenswrapper[4593]: livez check failed Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.234894 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" podUID="dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.273146 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" event={"ID":"edf60cff-ba6c-450f-bcec-7b14d7513120","Type":"ContainerStarted","Data":"4e90e15f4916d81ad815c84a464d7a3154554b360a2bfa8b0b55d27cfcb3731d"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.304626 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:22 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:22 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:22 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.304697 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.319939 4593 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g5zq7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.319996 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" podUID="43e8598d-f86e-425e-8418-bcfb93e3bd63" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.320113 4593 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g5zq7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.320174 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" podUID="43e8598d-f86e-425e-8418-bcfb93e3bd63" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.328676 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" event={"ID":"28ad6acc-fb5e-4d71-9f36-492c3b1262d2","Type":"ContainerStarted","Data":"165c5378079b51f54c98509e01a52658388b046b5c4394baa703f61a0c8ec9f3"} Jan 29 11:01:22 
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.338592 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.339348 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.839333769 +0000 UTC m=+148.712367970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.378626 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vtdww" event={"ID":"bb259eac-6aa7-42d9-883b-2af6b63af4b8","Type":"ContainerStarted","Data":"2776f3c70cbb7ede6321a7c87f7a751134696b83c8b00c05deb9a968a7c91fe7"}
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.425811 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" event={"ID":"1c91d49f-a382-4279-91c7-a43b3f1e071e","Type":"ContainerStarted","Data":"ac40e4222252a73877076cb3072f20d9c0a99b6b89d8444a35c6b1355a13ded7"}
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.440227 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.441732 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:22.941709531 +0000 UTC m=+148.814743732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.473227 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerStarted","Data":"696cf1720196cf57c4da0b337c830ea79045db65f0636c90b3de8b14528e9492"}
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.479043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" event={"ID":"fae65f9f-a5ea-442a-8c78-aa650d330c4d","Type":"ContainerStarted","Data":"e217b974fa4632683cf1c5b577dcf980fe12d9e389c20e4138bfb225df22cfad"}
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.479090 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" event={"ID":"fae65f9f-a5ea-442a-8c78-aa650d330c4d","Type":"ContainerStarted","Data":"33e285a80680f60c5fce9274227bd82a78afd2a3d765617f063b4e13a54188f7"}
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.507584 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" event={"ID":"e544204e-7186-4a22-a6bf-79a5101af4b6","Type":"ContainerStarted","Data":"0951708a49a18c39b5089e8701a82e83976042f4ab61f945ea72ff61a2c3931c"}
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.508557 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp"
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.510561 4593 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ftchp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body=
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.510624 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused"
Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.528047 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrstj" podStartSLOduration=126.528030825 podStartE2EDuration="2m6.528030825s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.485054043 +0000 UTC m=+148.358088234" watchObservedRunningTime="2026-01-29 11:01:22.528030825 +0000 UTC m=+148.401065026"
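Every MountVolume.MountDevice and UnmountVolume.TearDown failure in this window shares the root cause stated at the end of each error: the CSI code path resolves the driver name against the kubelet's registry of drivers, populated when a plugin's node-driver-registrar announces itself, and kubevirt.io.hostpath-provisioner has not registered yet, so client construction fails before any mount or unmount is attempted. A hedged sketch of that lookup (the registry type and socket path below are invented for illustration; only the error text mirrors the log):

package main

import (
	"fmt"
	"sync"
)

// csiDriverRegistry is a stand-in for the kubelet's registered-driver state.
type csiDriverRegistry struct {
	mu      sync.RWMutex
	sockets map[string]string // driver name -> plugin socket (hypothetical layout)
}

func (r *csiDriverRegistry) newCsiDriverClient(driver string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	sock, ok := r.sockets[driver]
	if !ok {
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	return sock, nil
}

func main() {
	reg := &csiDriverRegistry{sockets: map[string]string{}}

	// Before registration: exactly the failure mode repeating in the log.
	if _, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("failed to create newCsiDriverClient:", err)
	}

	// Once the plugin registers, the same retried operation can proceed.
	reg.sockets["kubevirt.io.hostpath-provisioner"] = "/var/lib/kubelet/plugins/hostpath.csi/csi.sock"
	if sock, err := reg.newCsiDriverClient("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("client dial target:", sock)
	}
}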
event={"ID":"6910728e-feba-4826-8447-11f4cf860c30","Type":"ContainerStarted","Data":"4450c9f92b23f4d5b82f78ff23480c9752cbd93f501d59b07b7c544108a5c382"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.545425 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.545781 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.045768031 +0000 UTC m=+148.918802222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.545810 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.547140 4593 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g9wvz container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.547175 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" podUID="6910728e-feba-4826-8447-11f4cf860c30" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.570394 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" event={"ID":"65e50d23-1adc-4462-9424-1d2157c2ff93","Type":"ContainerStarted","Data":"a1bb3a1d7e0f1f5e2ad1f4c3f6120cba74cc973779254be9cf207ce79d3c9f72"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.570430 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" event={"ID":"65e50d23-1adc-4462-9424-1d2157c2ff93","Type":"ContainerStarted","Data":"3dae87d680ec6d5acf3897bbd711f86697fcbfb6473637dee099b73bcd2b56ce"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.598001 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" event={"ID":"58e36a23-974a-4afd-b226-bb194d489cf0","Type":"ContainerStarted","Data":"0e7485990073d9196a875c8aca464ea8d3b4af7bf554594743fc5f93b3663142"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.614785 4593 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" event={"ID":"59084a0c-807b-47c9-b905-6e07817bcb89","Type":"ContainerStarted","Data":"b046d49d8fbd3ff0f6f39567887bd3b141e18ac5c2409dd80ef9787be72f9612"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.626415 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rpfbq" podStartSLOduration=126.626396955 podStartE2EDuration="2m6.626396955s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.526681207 +0000 UTC m=+148.399715418" watchObservedRunningTime="2026-01-29 11:01:22.626396955 +0000 UTC m=+148.499431146" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.626805 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" podStartSLOduration=126.626796467 podStartE2EDuration="2m6.626796467s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.622503137 +0000 UTC m=+148.495537338" watchObservedRunningTime="2026-01-29 11:01:22.626796467 +0000 UTC m=+148.499830658" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.647613 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.648188 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.148168255 +0000 UTC m=+149.021202456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.648402 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.649193 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.149182403 +0000 UTC m=+149.022216594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.665691 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" event={"ID":"719f2fcb-45e2-4600-82d9-fbf4263201a2","Type":"ContainerStarted","Data":"5cbe6c6ceefbd3454528d1631e16a77426547464c0e0bf6c69c03de9f7884459"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.665753 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" event={"ID":"719f2fcb-45e2-4600-82d9-fbf4263201a2","Type":"ContainerStarted","Data":"c908198ec5167b59b9e5cb5f2ee7a3101c2d985cc58e6f004c76055b3b344767"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.707749 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-96whs" event={"ID":"c5d626cc-ab7a-408c-9955-c3fc676a799b","Type":"ContainerStarted","Data":"79832379d1bbdea1bf48434932717ca1f0ed0888fea265c0dca3e98ee9699bb2"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.743581 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-c8vv4" podStartSLOduration=126.743559092 podStartE2EDuration="2m6.743559092s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.679511811 +0000 UTC m=+148.552546012" watchObservedRunningTime="2026-01-29 11:01:22.743559092 +0000 UTC m=+148.616593283" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.746567 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jnw9r" event={"ID":"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd","Type":"ContainerStarted","Data":"aca853c9026fbd3692d13110a368138ee932936b640d1d1cf17bfe05b9af1428"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.750340 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.750807 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.250788194 +0000 UTC m=+149.123822385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.790403 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz" podStartSLOduration=126.790377511 podStartE2EDuration="2m6.790377511s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.746439532 +0000 UTC m=+148.619473733" watchObservedRunningTime="2026-01-29 11:01:22.790377511 +0000 UTC m=+148.663411702" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.817953 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" event={"ID":"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc","Type":"ContainerStarted","Data":"fc6c20bf52aee2139b8d3d882bb7de41626a8b84520d0a5e1b6cb4ffba81ed35"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.843576 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" event={"ID":"478971f0-c97c-4eb1-86d2-50af06b8aafc","Type":"ContainerStarted","Data":"a2d60d5192d241923530c8bd5ed6cf2e230b686c0266f129683e8144da6ca5c5"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.850319 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-96whs" podStartSLOduration=126.850296766 podStartE2EDuration="2m6.850296766s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.791190524 +0000 UTC m=+148.664224725" watchObservedRunningTime="2026-01-29 11:01:22.850296766 +0000 UTC m=+148.723330967" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.850436 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jnw9r" podStartSLOduration=8.85043046 podStartE2EDuration="8.85043046s" podCreationTimestamp="2026-01-29 11:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.849785432 +0000 UTC m=+148.722819633" watchObservedRunningTime="2026-01-29 11:01:22.85043046 +0000 UTC m=+148.723464661" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.851784 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.852209 4593 nestedpendingoperations.go:348] Operation for 
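The pod_startup_latency_tracker entries report the same quantity twice: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, rendered once as raw seconds and once as podStartE2EDuration. A quick Go check against the service-ca entry above (both timestamps are copied from the log; the subtraction is ours):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps from the service-ca-9c57cc56f-96whs entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-29 10:59:16 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-29 11:01:22.850296766 +0000 UTC")
	if err != nil {
		panic(err)
	}

	d := running.Sub(created)
	fmt.Println(d)           // 2m6.850296766s -> the podStartE2EDuration form
	fmt.Println(d.Seconds()) // 126.850296766  -> the podStartSLOduration form
}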
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.3521935 +0000 UTC m=+149.225227701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.859141 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" event={"ID":"8d8d97d7-c0b0-4b84-90a2-42e4c49f9d50","Type":"ContainerStarted","Data":"e3eb68f3a20819414457d7b687abdfc99613007b340ae9126017cf556fad2b6d"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.878116 4593 generic.go:334] "Generic (PLEG): container finished" podID="f0ee22f5-d5c3-4686-ab5d-53223d05bef6" containerID="3078c01972d813c506b6d8519d9aab9bc964fd78d5df8d30ae175e731ae9564a" exitCode=0 Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.878225 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" event={"ID":"f0ee22f5-d5c3-4686-ab5d-53223d05bef6","Type":"ContainerDied","Data":"3078c01972d813c506b6d8519d9aab9bc964fd78d5df8d30ae175e731ae9564a"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.889166 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" event={"ID":"e7651ef0-a985-4314-a20a-7103624a257a","Type":"ContainerStarted","Data":"6963add583c8c165e41d2d04f97fa22d3b7c12081e62b89283d732699501fa99"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.901907 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" event={"ID":"dc1056e0-74e9-4be8-bcdf-92604e23a2e1","Type":"ContainerStarted","Data":"3cf3a2c7fa5ee0305b02c53e31347ce727cc5996fa01d68d0c1b7a391a402f94"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.906793 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" event={"ID":"5d8acfc6-0334-4294-8dd6-c3091ebb69d3","Type":"ContainerStarted","Data":"2f48cbda9004fb1cef5670cd7c470182d9032a02edc19f790e551c7da3e265f7"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.916506 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" event={"ID":"9bce548b-2c64-4ac5-a797-979de4cf7656","Type":"ContainerStarted","Data":"f2946137c5275477291e1d53969eabd7b8bdca8a4c5b713bf1318a819d020561"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.931961 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-29j27" event={"ID":"0a7ffb2d-39e9-426f-9364-ebe193a5adc8","Type":"ContainerStarted","Data":"f6685d80a88aeb1befbee546db61858e5d87768098d866a6e53fcd487269da65"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.932007 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-29j27" 
event={"ID":"0a7ffb2d-39e9-426f-9364-ebe193a5adc8","Type":"ContainerStarted","Data":"98624fdc0d26251d72edb6c9aa0bf22ff9b8dc38fec1822028fba9395ab4cb63"} Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.951515 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-gmw8k" podStartSLOduration=126.951492446 podStartE2EDuration="2m6.951492446s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.8958265 +0000 UTC m=+148.768860701" watchObservedRunningTime="2026-01-29 11:01:22.951492446 +0000 UTC m=+148.824526637" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.952069 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-j7hr6" podStartSLOduration=126.952062712 podStartE2EDuration="2m6.952062712s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:22.937245147 +0000 UTC m=+148.810279338" watchObservedRunningTime="2026-01-29 11:01:22.952062712 +0000 UTC m=+148.825096903" Jan 29 11:01:22 crc kubenswrapper[4593]: I0129 11:01:22.953301 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:22 crc kubenswrapper[4593]: E0129 11:01:22.954435 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.454419638 +0000 UTC m=+149.327453829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.012032 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" event={"ID":"eef5dc1f-d576-46dd-9de7-2a63c6d4157f","Type":"ContainerStarted","Data":"a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972"} Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.031161 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" event={"ID":"915745e3-1528-4d5f-84a6-001471123924","Type":"ContainerStarted","Data":"419886fe23b41f2860852302590cbbe00c425ce1a54ec11e7d5a3c0cfc693830"} Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.127319 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.132626 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.63261323 +0000 UTC m=+149.505647421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.229212 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.230305 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.730286051 +0000 UTC m=+149.603320242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.330409 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.330705 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.830694548 +0000 UTC m=+149.703728739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.382604 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" podStartSLOduration=83.382581359 podStartE2EDuration="1m23.382581359s" podCreationTimestamp="2026-01-29 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:23.379893595 +0000 UTC m=+149.252927806" watchObservedRunningTime="2026-01-29 11:01:23.382581359 +0000 UTC m=+149.255615550" Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.383212 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:23 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:23 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:23 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.383464 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.403053 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" event={"ID":"8246045d-6937-4d02-b488-24bcf2eec4bf","Type":"ContainerStarted","Data":"35164e54a60485a7dbe013cf824db9b1209cd122707ac9c6ccc1e471f29e4abb"} Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 
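The nestedpendingoperations errors also show the retry machinery at work: a failed operation records a "No retries permitted until" deadline, and the volume reconciler skips it until that deadline passes, which is why the same mount/unmount pair recurs roughly every half second. A simplified gate modeled on the fixed 500ms durationBeforeRetry visible in this log (Kubernetes' actual backoff can grow the delay on repeated failures; that detail is deliberately omitted here):

package main

import (
	"fmt"
	"time"
)

// retryGate models the "No retries permitted until" bookkeeping attached to a
// pending volume operation.
type retryGate struct {
	notBefore time.Time
}

func (g *retryGate) markFailed(backoff time.Duration) {
	g.notBefore = time.Now().Add(backoff)
}

func (g *retryGate) mayRetry() bool {
	return !time.Now().Before(g.notBefore)
}

func main() {
	const durationBeforeRetry = 500 * time.Millisecond
	var g retryGate

	for attempt := 1; attempt <= 3; attempt++ {
		// Pretend the operation failed again (driver still unregistered).
		g.markFailed(durationBeforeRetry)
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			attempt, g.notBefore.Format("15:04:05.000"), durationBeforeRetry)

		// The reconciler's periodic pass only re-queues once the gate opens.
		for !g.mayRetry() {
			time.Sleep(50 * time.Millisecond)
		}
	}
}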
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.434228 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.435513 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:23.935492479 +0000 UTC m=+149.808526670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.569879 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.570343 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.0703283 +0000 UTC m=+149.943362491 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.689255 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.689624 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.189607495 +0000 UTC m=+150.062641686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.874247 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.874323 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.874353 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.874394 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.886969 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.887560 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.387523069 +0000 UTC m=+150.260557270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.918301 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.965534 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.976097 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:23 crc kubenswrapper[4593]: I0129 11:01:23.976329 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:01:23 crc kubenswrapper[4593]: E0129 11:01:23.976791 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.476762004 +0000 UTC m=+150.349796195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:23.980610 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:23.994354 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" podStartSLOduration=127.994338835 podStartE2EDuration="2m7.994338835s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:23.869616498 +0000 UTC m=+149.742650699" watchObservedRunningTime="2026-01-29 11:01:23.994338835 +0000 UTC m=+149.867373016"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.085442 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.086065 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.58604858 +0000 UTC m=+150.459082771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.088692 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.103900 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.104298 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.146556 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pf5p2" podStartSLOduration=128.146539041 podStartE2EDuration="2m8.146539041s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.059496027 +0000 UTC m=+149.932530218" watchObservedRunningTime="2026-01-29 11:01:24.146539041 +0000 UTC m=+150.019573232"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.147563 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ct922" podStartSLOduration=128.14755802 podStartE2EDuration="2m8.14755802s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.14614497 +0000 UTC m=+150.019179161" watchObservedRunningTime="2026-01-29 11:01:24.14755802 +0000 UTC m=+150.020592211"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.190054 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.190462 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.690444848 +0000 UTC m=+150.563479039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.208838 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-9cr59" podStartSLOduration=128.208819593 podStartE2EDuration="2m8.208819593s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.205927842 +0000 UTC m=+150.078962033" watchObservedRunningTime="2026-01-29 11:01:24.208819593 +0000 UTC m=+150.081853784"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.209735 4593 patch_prober.go:28] interesting pod/console-operator-58897d9998-fm7cc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.209792 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" podUID="661d5765-a5d7-4cb4-87b9-284f36dc385e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.210452 4593 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-g5zq7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.210491 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7" podUID="43e8598d-f86e-425e-8418-bcfb93e3bd63" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.274075 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gz9wd" podStartSLOduration=128.274061317 podStartE2EDuration="2m8.274061317s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.27274877 +0000 UTC m=+150.145782971" watchObservedRunningTime="2026-01-29 11:01:24.274061317 +0000 UTC m=+150.147095508"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.293415 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.293696 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.793684555 +0000 UTC m=+150.666718746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.392277 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" event={"ID":"719f2fcb-45e2-4600-82d9-fbf4263201a2","Type":"ContainerStarted","Data":"7dcea9124afbdb503f66212747ce1aa67316de754f6a9cd930f9fb2d93776a2e"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.392702 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.395269 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.395656 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:24.895642376 +0000 UTC m=+150.768676567 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.401953 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:01:24 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld
Jan 29 11:01:24 crc kubenswrapper[4593]: [+]process-running ok
Jan 29 11:01:24 crc kubenswrapper[4593]: healthz check failed
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.401999 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.492745 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-29j27" event={"ID":"0a7ffb2d-39e9-426f-9364-ebe193a5adc8","Type":"ContainerStarted","Data":"559af5a67e9a8e3d351c94bfa87518901de499103490e8b9d574bd2b89a0accd"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.493880 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-29j27"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.494744 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" podStartSLOduration=128.494720617 podStartE2EDuration="2m8.494720617s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.492479984 +0000 UTC m=+150.365514185" watchObservedRunningTime="2026-01-29 11:01:24.494720617 +0000 UTC m=+150.367754808"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.502003 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.502318 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.002303379 +0000 UTC m=+150.875337570 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.509563 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" event={"ID":"58e36a23-974a-4afd-b226-bb194d489cf0","Type":"ContainerStarted","Data":"46245db333e8ce753863ff1f2b5f45124a4876dbab3c78b453d6395af231093a"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.519048 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" event={"ID":"0aa74baf-fde3-4dad-aef0-7b8b1ae90098","Type":"ContainerStarted","Data":"134cb2e4c5ab4b63e76188908744960f17a0602be1969f5d2c5bfb52e5ef0868"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.519851 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.521129 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.521165 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.617697 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.618735 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.118715163 +0000 UTC m=+150.991749374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.647977 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-96whs" event={"ID":"c5d626cc-ab7a-408c-9955-c3fc676a799b","Type":"ContainerStarted","Data":"b2a61f706f8d76b4219fdd3d32e3038a72a77fd42e0f2de5afca7281ce2981ae"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.706370 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" event={"ID":"5d8acfc6-0334-4294-8dd6-c3091ebb69d3","Type":"ContainerStarted","Data":"b2542bf8201794dfa409603a8c0db5fbf7fc73188de204efed4719fcb18d34d5"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.706448 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" event={"ID":"5d8acfc6-0334-4294-8dd6-c3091ebb69d3","Type":"ContainerStarted","Data":"bf865df54dd7eea44cdf14782b35051e879ed53d58fecdb5dfaaad1b3e3ed384"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.716825 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" event={"ID":"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc","Type":"ContainerStarted","Data":"9beb7a130a2815145e1e969bda1d459ac990a7a62677a18a7abc68a72290e404"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.724391 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.724741 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.224728398 +0000 UTC m=+151.097762579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.725358 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" event={"ID":"59084a0c-807b-47c9-b905-6e07817bcb89","Type":"ContainerStarted","Data":"19cbdf2a6be00984d37346a7d481c69738c6ffaad2afee095f61fbfc754a3a9e"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.725971 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.726864 4593 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zpjgp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.726916 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" podUID="59084a0c-807b-47c9-b905-6e07817bcb89" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.795697 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jnw9r" event={"ID":"77144fd8-36e8-4c75-ae40-fd3c9bb1a6fd","Type":"ContainerStarted","Data":"142bada782bce23bd62180bbfe11e11d2a8c72b3003d42ba1e6e711468c4cfc6"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.798070 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerStarted","Data":"bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.798467 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t7wn4"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.800862 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.800902 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.801775 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" event={"ID":"28ad6acc-fb5e-4d71-9f36-492c3b1262d2","Type":"ContainerStarted","Data":"4c14ef3125a849785013eea9aabd2bfaa194654053572172ffc1115dde456e5e"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.803382 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.804648 4593 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vlh9s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.804718 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" podUID="28ad6acc-fb5e-4d71-9f36-492c3b1262d2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.807225 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-29j27" podStartSLOduration=10.807209234 podStartE2EDuration="10.807209234s" podCreationTimestamp="2026-01-29 11:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.671994383 +0000 UTC m=+150.545028584" watchObservedRunningTime="2026-01-29 11:01:24.807209234 +0000 UTC m=+150.680243425"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.823980 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" event={"ID":"edf60cff-ba6c-450f-bcec-7b14d7513120","Type":"ContainerStarted","Data":"1df8dddde5fc393630c64585fc5c62998195a9c2d108f207e9e5b63f08bd2f66"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.827670 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.828759 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.328681005 +0000 UTC m=+151.201715236 (durationBeforeRetry 500ms).
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.845799 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qjbwn" event={"ID":"dc1056e0-74e9-4be8-bcdf-92604e23a2e1","Type":"ContainerStarted","Data":"0260b49652051fa32135ad0e9703d42815dc7d97c24a46339c85fdc3235e9e35"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.859815 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" event={"ID":"e7651ef0-a985-4314-a20a-7103624a257a","Type":"ContainerStarted","Data":"62886bd06e3a980b0a74bd1e6271c27a56ec4c847728204bde958ae5cc1cb533"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.859853 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" event={"ID":"e7651ef0-a985-4314-a20a-7103624a257a","Type":"ContainerStarted","Data":"d70bdd805acd59fa83447572f6c4d9bb1cec91d0ad6fe98200f1231bba31ec13"}
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.875689 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-8b552" podStartSLOduration=128.875673699 podStartE2EDuration="2m8.875673699s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.871861042 +0000 UTC m=+150.744895243" watchObservedRunningTime="2026-01-29 11:01:24.875673699 +0000 UTC m=+150.748707890"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.876065 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podStartSLOduration=128.876060179 podStartE2EDuration="2m8.876060179s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.807895403 +0000 UTC m=+150.680929594" watchObservedRunningTime="2026-01-29 11:01:24.876060179 +0000 UTC m=+150.749094370"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.935654 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:24 crc kubenswrapper[4593]: I0129 11:01:24.935843 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g9wvz"
Jan 29 11:01:24 crc kubenswrapper[4593]: E0129 11:01:24.936124 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.436109269 +0000 UTC m=+151.309143460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.038104 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.040273 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.54025397 +0000 UTC m=+151.413288171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.164435 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.164857 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.664842694 +0000 UTC m=+151.537876885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.271967 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.272274 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.772245148 +0000 UTC m=+151.645279369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.332202 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:01:25 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld
Jan 29 11:01:25 crc kubenswrapper[4593]: [+]process-running ok
Jan 29 11:01:25 crc kubenswrapper[4593]: healthz check failed
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.332276 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.339351 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g5zq7"
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.376678 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.377770 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.877755348 +0000 UTC m=+151.750789539 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.386578 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-l64wd" podStartSLOduration=129.386538694 podStartE2EDuration="2m9.386538694s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:24.906237723 +0000 UTC m=+150.779271914" watchObservedRunningTime="2026-01-29 11:01:25.386538694 +0000 UTC m=+151.259572885"
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.477545 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.477753 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:25.977726043 +0000 UTC m=+151.850760284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.656374 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.656707 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.156695078 +0000 UTC m=+152.029729259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.760243 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:25 crc kubenswrapper[4593]: E0129 11:01:25.760603 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.260587623 +0000 UTC m=+152.133621814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.990806 4593 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ftchp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.991130 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:01:25 crc kubenswrapper[4593]: I0129 11:01:25.993367 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.009278 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.509253165 +0000 UTC m=+152.382287356 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.021081 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.021481 4593 patch_prober.go:28] interesting pod/apiserver-76f77b778f-m9zzn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]log ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]etcd ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/max-in-flight-filter ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/openshift.io-startinformers ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 29 11:01:26 crc kubenswrapper[4593]: livez check failed
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.021535 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn" podUID="dabb0548-dbdb-438c-a98c-2eb6e2b2c0d9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.068215 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" event={"ID":"8ee0cc5f-ef60-4aac-9a88-dd2a0c767afc","Type":"ContainerStarted","Data":"ba0d8002780561503b14f07f45dc8c892e8c7cb26b80ec3c0f96e63d823f0f56"}
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.071231 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" event={"ID":"e9136490-ddbf-4318-91c6-e73d74e7b599","Type":"ContainerStarted","Data":"dca828c12b7e5ed017004f46bc1bc2848909e5feb8de8ea119f476e97237367d"}
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.075578 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.075621 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.077541 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" event={"ID":"f0ee22f5-d5c3-4686-ab5d-53223d05bef6","Type":"ContainerStarted","Data":"41c7ed3294e3b4ac4e494b9a971b8c7eb3897a70618dbc3befe8c9f77d288938"}
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.078189 4593 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zpjgp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.078245 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" podUID="59084a0c-807b-47c9-b905-6e07817bcb89" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.079167 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.079186 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.085269 4593 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vlh9s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.085303 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" podUID="28ad6acc-fb5e-4d71-9f36-492c3b1262d2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.095324 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.095737 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.595720773 +0000 UTC m=+152.468754964 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.206283 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.212531 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.7125156 +0000 UTC m=+152.585549791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.315086 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.315427 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:26.815407396 +0000 UTC m=+152.688441587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.398043 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-vdt9h" podStartSLOduration=130.398022826 podStartE2EDuration="2m10.398022826s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:26.134254391 +0000 UTC m=+152.007288582" watchObservedRunningTime="2026-01-29 11:01:26.398022826 +0000 UTC m=+152.271057017"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.398142 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" podStartSLOduration=130.398138399 podStartE2EDuration="2m10.398138399s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:25.38818077 +0000 UTC m=+151.261214961" watchObservedRunningTime="2026-01-29 11:01:26.398138399 +0000 UTC m=+152.271172590"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.531037 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.531467 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.031452707 +0000 UTC m=+152.904486898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.605056 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:01:26 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld
Jan 29 11:01:26 crc kubenswrapper[4593]: [+]process-running ok
Jan 29 11:01:26 crc kubenswrapper[4593]: healthz check failed
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.605401 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.634535 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.634918 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.1349027 +0000 UTC m=+153.007936891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.647064 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef5dc1f_d576_46dd_9de7_2a63c6d4157f.slice/crio-a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeef5dc1f_d576_46dd_9de7_2a63c6d4157f.slice/crio-conmon-a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.765818 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.766197 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.266183141 +0000 UTC m=+153.139217332 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.867501 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.867872 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.367839164 +0000 UTC m=+153.240873355 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.885730 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" podStartSLOduration=130.885708472 podStartE2EDuration="2m10.885708472s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:26.76938469 +0000 UTC m=+152.642418891" watchObservedRunningTime="2026-01-29 11:01:26.885708472 +0000 UTC m=+152.758742663"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.886889 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.887593 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.901931 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.924697 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Jan 29 11:01:26 crc kubenswrapper[4593]: I0129 11:01:26.969727 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:26 crc kubenswrapper[4593]: E0129 11:01:26.970212 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.470192075 +0000 UTC m=+153.343226266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.064758 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.075295 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.075585 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.075659 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.075770 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.575750397 +0000 UTC m=+153.448784578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.084911 4593 generic.go:334] "Generic (PLEG): container finished" podID="eef5dc1f-d576-46dd-9de7-2a63c6d4157f" containerID="a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972" exitCode=0
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.086561 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" event={"ID":"eef5dc1f-d576-46dd-9de7-2a63c6d4157f","Type":"ContainerDied","Data":"a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972"}
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.089004 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.089061 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.089128 4593 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vlh9s container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.089147 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" podUID="28ad6acc-fb5e-4d71-9f36-492c3b1262d2" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.094016 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" podStartSLOduration=131.094000747 podStartE2EDuration="2m11.094000747s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:26.979506735 +0000 UTC m=+152.852540936" watchObservedRunningTime="2026-01-29 11:01:27.094000747 +0000 UTC m=+152.967034938"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.098121 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.178023 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-8425v"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.178496 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.178538 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.178593 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.179362 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.179765 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.679749884 +0000 UTC m=+153.552784075 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.191100 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-8425v"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.197804 4593 patch_prober.go:28] interesting pod/console-f9d7485db-8425v container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.197879 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8425v" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.244812 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.280426 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.282122 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.782106157 +0000 UTC m=+153.655140348 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.332023 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xx52v"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.340595 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-t7wn4" podStartSLOduration=131.340577031 podStartE2EDuration="2m11.340577031s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:27.154846739 +0000 UTC m=+153.027880930" watchObservedRunningTime="2026-01-29 11:01:27.340577031 +0000 UTC m=+153.213611222"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.359898 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:01:27 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld
Jan 29 11:01:27 crc kubenswrapper[4593]: [+]process-running ok
Jan 29 11:01:27 crc kubenswrapper[4593]: healthz check failed
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.359940 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.384356 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.384651 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.884623144 +0000 UTC m=+153.757657325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.490519 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.492992 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:27.992952342 +0000 UTC m=+153.865986543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.516288 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.531352 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.531401 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.532563 4593 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-djdmx container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.532794 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" podUID="f0ee22f5-d5c3-4686-ab5d-53223d05bef6" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.548728 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rnn8b" podStartSLOduration=131.548708622 podStartE2EDuration="2m11.548708622s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:27.449733924 +0000 UTC m=+153.322768135" watchObservedRunningTime="2026-01-29 11:01:27.548708622 +0000 UTC m=+153.421742813"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.592490 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.593142 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.093130174 +0000 UTC m=+153.966164355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.695425 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:27 crc kubenswrapper[4593]: E0129 11:01:27.695811 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.195791214 +0000 UTC m=+154.068825405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.911875 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.911931 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.912015 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.912035 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.912407 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:27
crc kubenswrapper[4593]: E0129 11:01:27.912763 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.412742241 +0000 UTC m=+154.285776532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.964587 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vlh9s" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.976455 4593 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zpjgp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.976509 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" podUID="59084a0c-807b-47c9-b905-6e07817bcb89" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.976595 4593 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-zpjgp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.976615 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" podUID="59084a0c-807b-47c9-b905-6e07817bcb89" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.994053 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.994115 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.994442 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 29 11:01:27 crc kubenswrapper[4593]: I0129 11:01:27.994490 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.164586 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.164754 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.664729357 +0000 UTC m=+154.537763548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.164908 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.165282 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.665266931 +0000 UTC m=+154.538301122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.181151 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" event={"ID":"e9136490-ddbf-4318-91c6-e73d74e7b599","Type":"ContainerStarted","Data":"ad3f849f3006828d0a15e797bdea7fed3078f0652a5bc01a59b83a6d0ee24a6d"} Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.188038 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d1c6dffceda9bbdd2912bf97b95c997f77c990bbd0911e7d7180592727745739"} Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.188106 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ba040d1f8f92a8dc180bbd9b343662b333e547072f578c81646bb33e7c310983"} Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.188331 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.192677 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b3c84013b146db0c242e89fe2706b26110f225b6ef2d4f806c94e09a8861298e"} Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.375343 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.376017 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.875956843 +0000 UTC m=+154.748991034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.443136 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:28 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:28 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:28 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.443571 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.476736 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.483170 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:28.98315215 +0000 UTC m=+154.856186341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.492031 4593 patch_prober.go:28] interesting pod/console-operator-58897d9998-fm7cc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.492121 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" podUID="661d5765-a5d7-4cb4-87b9-284f36dc385e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.615601 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.615850 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.11582081 +0000 UTC m=+154.988855001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.616146 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.616415 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.116402367 +0000 UTC m=+154.989436558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.682574 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx" podStartSLOduration=132.682554776 podStartE2EDuration="2m12.682554776s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:28.67268256 +0000 UTC m=+154.545716761" watchObservedRunningTime="2026-01-29 11:01:28.682554776 +0000 UTC m=+154.555588967" Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.811958 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.812308 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.312291953 +0000 UTC m=+155.185326144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:28 crc kubenswrapper[4593]: I0129 11:01:28.917248 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:28 crc kubenswrapper[4593]: E0129 11:01:28.917594 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.417582478 +0000 UTC m=+155.290616669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.039645 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.040015 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.539999311 +0000 UTC m=+155.413033502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.141025 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.141375 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.641359174 +0000 UTC m=+155.514393365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.241796 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.242298 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.742281557 +0000 UTC m=+155.615315748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.268179 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" event={"ID":"e9136490-ddbf-4318-91c6-e73d74e7b599","Type":"ContainerStarted","Data":"b4c4530ccf25a0bf81f49c7a364bffb6ef5c4571a43866b2820656d70677c2ae"} Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.270303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"518852f3ae67c727bf2c9699bb1ebbd7a5343979c6a650ae839125f6f5a77375"} Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.320551 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:29 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:29 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:29 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.320661 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.379218 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: 
\"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.379695 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.879676079 +0000 UTC m=+155.752710270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.488414 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.490039 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:29.990002813 +0000 UTC m=+155.863037004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.590381 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.590946 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.090933026 +0000 UTC m=+155.963967217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.698068 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.698487 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.198471283 +0000 UTC m=+156.071505464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:29 crc kubenswrapper[4593]: I0129 11:01:29.856251 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:29 crc kubenswrapper[4593]: E0129 11:01:29.856930 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.356914834 +0000 UTC m=+156.229949025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.063373 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.063693 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.563669334 +0000 UTC m=+156.436703525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.256410 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.256715 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.756703982 +0000 UTC m=+156.629738173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.315628 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:30 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:30 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:30 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.315986 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.337175 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.469368 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume\") pod \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.469808 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.469818 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.969782519 +0000 UTC m=+156.842816710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.469888 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume\") pod \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.469913 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95lmg\" (UniqueName: \"kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg\") pod \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\" (UID: \"eef5dc1f-d576-46dd-9de7-2a63c6d4157f\") " Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.470081 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.470169 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume" (OuterVolumeSpecName: "config-volume") pod "eef5dc1f-d576-46dd-9de7-2a63c6d4157f" (UID: "eef5dc1f-d576-46dd-9de7-2a63c6d4157f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.470489 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:30.970473619 +0000 UTC m=+156.843507820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.484043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"005a7ceadef3d52b7889d079a191cf32cd310968eb816c46c1e7caa730904d30"} Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.484095 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9637a3e2b6d22746d4b44f195443a4359ebe4cf5b08dd5c909a9789fef96f476"} Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.493509 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.493667 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm" event={"ID":"eef5dc1f-d576-46dd-9de7-2a63c6d4157f","Type":"ContainerDied","Data":"6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5"} Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.493696 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e9375090f14ff59ea759c16737f4727f94c4e541ab0f6f5ae3c71787d1187c5" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.531681 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eef5dc1f-d576-46dd-9de7-2a63c6d4157f" (UID: "eef5dc1f-d576-46dd-9de7-2a63c6d4157f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.532157 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg" (OuterVolumeSpecName: "kube-api-access-95lmg") pod "eef5dc1f-d576-46dd-9de7-2a63c6d4157f" (UID: "eef5dc1f-d576-46dd-9de7-2a63c6d4157f"). InnerVolumeSpecName "kube-api-access-95lmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.573213 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.573494 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.573514 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95lmg\" (UniqueName: \"kubernetes.io/projected/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-kube-api-access-95lmg\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.573529 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef5dc1f-d576-46dd-9de7-2a63c6d4157f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.573630 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.073614942 +0000 UTC m=+156.946649133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.733167 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.734340 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.234329196 +0000 UTC m=+157.107363387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.836276 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.836606 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.336587746 +0000 UTC m=+157.209621937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.946355 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:30 crc kubenswrapper[4593]: E0129 11:01:30.946702 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.446689165 +0000 UTC m=+157.319723356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.992038 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn"
Jan 29 11:01:30 crc kubenswrapper[4593]: I0129 11:01:30.999267 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-m9zzn"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.046056 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.046960 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.047128 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.547106943 +0000 UTC m=+157.420141134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.047487 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.047784 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.547776711 +0000 UTC m=+157.420810902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.226857 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.227667 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.727651351 +0000 UTC m=+157.600685532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.324544 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:01:31 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld
Jan 29 11:01:31 crc kubenswrapper[4593]: [+]process-running ok
Jan 29 11:01:31 crc kubenswrapper[4593]: healthz check failed
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.324667 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.330246 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.330665 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.830653621 +0000 UTC m=+157.703687812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.462990 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.463256 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:31.963240258 +0000 UTC m=+157.836274449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.466861 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"]
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.467046 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eef5dc1f-d576-46dd-9de7-2a63c6d4157f" containerName="collect-profiles"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.467062 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="eef5dc1f-d576-46dd-9de7-2a63c6d4157f" containerName="collect-profiles"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.467169 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="eef5dc1f-d576-46dd-9de7-2a63c6d4157f" containerName="collect-profiles"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.467868 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.508393 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"54e3f9bd-cf5f-4361-81b2-78571380f93f","Type":"ContainerStarted","Data":"412596aea7ed79508efc55009f025bcad32104c84207c15c5c4be80493ef4961"}
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.511085 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" event={"ID":"e9136490-ddbf-4318-91c6-e73d74e7b599","Type":"ContainerStarted","Data":"9e13240e319463e4bf3d8598ae9956ab8cee414615315afe26dd555048869166"}
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.511254 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.525039 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"]
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.563880 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.564492 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.064473029 +0000 UTC m=+157.937507240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.640096 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.640686 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.642666 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zv27c" podStartSLOduration=17.642628405 podStartE2EDuration="17.642628405s" podCreationTimestamp="2026-01-29 11:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:31.635906456 +0000 UTC m=+157.508940647" watchObservedRunningTime="2026-01-29 11:01:31.642628405 +0000 UTC m=+157.515662596"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.648733 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.648835 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.665285 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.665562 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.665603 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j57m\" (UniqueName: \"kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.665660 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.666337 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.166313757 +0000 UTC m=+158.039347948 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.767327 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lf9gr"]
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.767919 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.767468 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.768051 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.768094 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.768149 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.768170 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.768191 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j57m\" (UniqueName: \"kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.768722 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.26871146 +0000 UTC m=+158.141745651 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.769042 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.772881 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.807943 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lf9gr"]
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.816552 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j57m\" (UniqueName: \"kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m\") pod \"certified-operators-qdz2v\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.851540 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"]
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.852742 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.868978 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869264 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.869304 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.369278352 +0000 UTC m=+158.242312573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869351 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spqr2\" (UniqueName: \"kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869372 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869384 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869498 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869582 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869658 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869684 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869731 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.869783 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wxk\" (UniqueName: \"kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.870316 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.37030636 +0000 UTC m=+158.243340571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.931755 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"]
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.953145 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.964012 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970487 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.970690 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.470671387 +0000 UTC m=+158.343705578 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970727 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spqr2\" (UniqueName: \"kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970752 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970798 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970835 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970865 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970890 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.970929 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wxk\" (UniqueName: \"kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:31 crc kubenswrapper[4593]: E0129 11:01:31.971086 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.471075548 +0000 UTC m=+158.344109739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.971258 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.971391 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.971513 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:31 crc kubenswrapper[4593]: I0129 11:01:31.971757 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.075042 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.083311 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.084642 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.584598323 +0000 UTC m=+158.457632514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.085083 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.085510 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.585493898 +0000 UTC m=+158.458528089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.118948 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdz2v"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.136868 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wxk\" (UniqueName: \"kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk\") pod \"certified-operators-fgg5s\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.162555 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spqr2\" (UniqueName: \"kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2\") pod \"community-operators-lf9gr\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.165565 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w7gmb"]
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.173347 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgg5s"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.176619 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.187223 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.187361 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.187386 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwmr4\" (UniqueName: \"kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.187478 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.187580 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.687564182 +0000 UTC m=+158.560598373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.211322 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w7gmb"]
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.334911 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.335746 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.335777 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwmr4\" (UniqueName: \"kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.335867 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.335903 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.336347 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.336398 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.336730 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.836714283 +0000 UTC m=+158.709748534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.341899 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:01:32 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld
Jan 29 11:01:32 crc kubenswrapper[4593]: [+]process-running ok
Jan 29 11:01:32 crc kubenswrapper[4593]: healthz check failed
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.341964 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.424947 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lf9gr"
Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.436577 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.936562305 +0000 UTC m=+158.809596496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.436500 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.436905 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.437159 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:32.93713601 +0000 UTC m=+158.810170201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.490821 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwmr4\" (UniqueName: \"kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4\") pod \"community-operators-w7gmb\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.571384 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7gmb"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.572161 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.572462 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.072448144 +0000 UTC m=+158.945482335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.581990 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.633691 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-djdmx"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.765870 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"54e3f9bd-cf5f-4361-81b2-78571380f93f","Type":"ContainerStarted","Data":"15afaa0d2878e6c1cc1e59308afdc3dd8e09e8f7b2a5941c77353c3358c20af0"}
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.767613 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-29j27"
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.769924 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.772108 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.271849629 +0000 UTC m=+159.144883820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:32 crc kubenswrapper[4593]: I0129 11:01:32.932745 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:32 crc kubenswrapper[4593]: E0129 11:01:32.933712 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.433695905 +0000 UTC m=+159.306730096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.065182 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.065473 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.565461599 +0000 UTC m=+159.438495790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.177818 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.179197 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.679181588 +0000 UTC m=+159.552215779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.301392 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.301732 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.801720585 +0000 UTC m=+159.674754776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.318858 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 11:01:33 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld
Jan 29 11:01:33 crc kubenswrapper[4593]: [+]process-running ok
Jan 29 11:01:33 crc kubenswrapper[4593]: healthz check failed
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.318908 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.423298 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=7.423278474 podStartE2EDuration="7.423278474s" podCreationTimestamp="2026-01-29 11:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:33.010167633 +0000 UTC m=+158.883201824" watchObservedRunningTime="2026-01-29 11:01:33.423278474 +0000 UTC m=+159.296312665"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.425539 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tvwft"]
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.426798 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.427693 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.428222 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:33.928204321 +0000 UTC m=+159.801238512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.448969 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvwft"]
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.493982 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.528780 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.528829 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.528862 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.528898 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5bhb\" (UniqueName: \"kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.529441 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.029427462 +0000 UTC m=+159.902461653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.664745 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.665098 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.665153 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.665210 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5bhb\" (UniqueName: \"kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.666045 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.16597417 +0000 UTC m=+160.039008361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.669324 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.670448 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.726962 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5bhb\" (UniqueName: \"kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb\") pod \"redhat-marketplace-tvwft\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.825439 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl"
Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.826100 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.326079757 +0000 UTC m=+160.199113948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.837962 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvwft"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.900178 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"]
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.901152 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69z82"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.914103 4593 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.928219 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.928414 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.928495 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc45l\" (UniqueName: \"kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82"
Jan 29 11:01:33 crc kubenswrapper[4593]: I0129 11:01:33.928604 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82"
Jan 29 11:01:33 crc kubenswrapper[4593]: E0129 11:01:33.929454 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.429435627 +0000 UTC m=+160.302469818 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.028443 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.028521 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.029325 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.029379 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc45l\" (UniqueName: \"kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.029450 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.029476 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.029758 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.529746912 +0000 UTC m=+160.402781103 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.030394 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.030774 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.036390 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.130931 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.131582 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.631560029 +0000 UTC m=+160.504594220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.133288 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc45l\" (UniqueName: \"kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l\") pod \"redhat-marketplace-69z82\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.240988 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.285050 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.78502992 +0000 UTC m=+160.658064111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.347323 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:34 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:34 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:34 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.347373 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.347839 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.372220 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.372686 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.872664881 +0000 UTC m=+160.745699072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.474819 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.475111 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:34.975099025 +0000 UTC m=+160.848133216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.577792 4593 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T11:01:33.914131679Z","Handler":null,"Name":""} Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.587102 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.587460 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:35.087443016 +0000 UTC m=+160.960477207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.741185 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:34 crc kubenswrapper[4593]: E0129 11:01:34.741510 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 11:01:35.241499993 +0000 UTC m=+161.114534184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-g72zl" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: W0129 11:01:34.775671 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d47516f_05e5_4f96_bf5a_c4251af51b6b.slice/crio-96ef38f406756da164944fbca4b3b1aac366663320c1359747791a21ca1ed585 WatchSource:0}: Error finding container 96ef38f406756da164944fbca4b3b1aac366663320c1359747791a21ca1ed585: Status 404 returned error can't find the container with id 96ef38f406756da164944fbca4b3b1aac366663320c1359747791a21ca1ed585 Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.814836 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"] Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.820879 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.822268 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.829924 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.838734 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"] Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.842114 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.842523 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.842550 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.842612 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-879j2\" (UniqueName: \"kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:34 crc kubenswrapper[4593]: 
E0129 11:01:34.917760 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 11:01:35.417733031 +0000 UTC m=+161.290767222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.917925 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.970023 4593 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 11:01:34 crc kubenswrapper[4593]: I0129 11:01:34.970049 4593 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.036967 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.037153 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.037186 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.037312 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-879j2\" (UniqueName: \"kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.046029 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.046340 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.178990 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-879j2\" (UniqueName: \"kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2\") pod \"redhat-operators-tm7d7\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.236746 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgg5s" event={"ID":"695d677a-4519-4ff0-9c6a-cbc902b00ee5","Type":"ContainerStarted","Data":"73c935e8b979b7dc8ab160b89b0aa92943613ba07d23ca3617474e48390b50f1"} Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.237087 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdz2v" event={"ID":"3d47516f-05e5-4f96-bf5a-c4251af51b6b","Type":"ContainerStarted","Data":"96ef38f406756da164944fbca4b3b1aac366663320c1359747791a21ca1ed585"} Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.239831 4593 generic.go:334] "Generic (PLEG): container finished" podID="54e3f9bd-cf5f-4361-81b2-78571380f93f" containerID="15afaa0d2878e6c1cc1e59308afdc3dd8e09e8f7b2a5941c77353c3358c20af0" exitCode=0 Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.239867 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"54e3f9bd-cf5f-4361-81b2-78571380f93f","Type":"ContainerDied","Data":"15afaa0d2878e6c1cc1e59308afdc3dd8e09e8f7b2a5941c77353c3358c20af0"} Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.265668 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lf9gr"] Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.265756 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.267093 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.382725 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.382768 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.382811 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwkcz\" (UniqueName: \"kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.388061 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:35 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:35 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:35 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.388120 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.394470 4593 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.394518 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.474061 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.491024 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.491082 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.491148 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwkcz\" (UniqueName: \"kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.492597 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.493511 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.563528 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.636210 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwkcz\" (UniqueName: \"kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz\") pod \"redhat-operators-cqhd7\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.767986 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.773576 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.946996 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-g72zl\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:35 crc kubenswrapper[4593]: I0129 11:01:35.994429 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.145762 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:36 crc kubenswrapper[4593]: W0129 11:01:36.220182 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfb142e67_1809_4b4f_91d6_1c745a85cb13.slice/crio-d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c WatchSource:0}: Error finding container d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c: Status 404 returned error can't find the container with id d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.259321 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.337685 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:36 crc kubenswrapper[4593]: [-]has-synced failed: reason withheld Jan 29 11:01:36 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:36 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.337740 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.387661 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lf9gr" event={"ID":"9c000e16-ab7a-4247-99da-74ea62d94b89","Type":"ContainerStarted","Data":"e852468ceed93d241feec7b7965eaf616d41cdfd72c07bd89b3ac0aca81937b9"} Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.397387 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvwft"] Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.429306 4593 generic.go:334] "Generic (PLEG): container finished" podID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" containerID="b9d5c7d4701eae15759c1c9b230bf47aaf13c122f4acea86bd71b0030082917d" exitCode=0 Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.429880 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgg5s" event={"ID":"695d677a-4519-4ff0-9c6a-cbc902b00ee5","Type":"ContainerDied","Data":"b9d5c7d4701eae15759c1c9b230bf47aaf13c122f4acea86bd71b0030082917d"} Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.436356 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.473833 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb142e67-1809-4b4f-91d6-1c745a85cb13","Type":"ContainerStarted","Data":"d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c"} Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.495382 4593 generic.go:334] "Generic (PLEG): container finished" podID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" containerID="45fd11091e4829626417cd96b671777720a463c182e9d6f349c55edbbe7126c6" exitCode=0 Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.496651 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdz2v" event={"ID":"3d47516f-05e5-4f96-bf5a-c4251af51b6b","Type":"ContainerDied","Data":"45fd11091e4829626417cd96b671777720a463c182e9d6f349c55edbbe7126c6"} Jan 29 11:01:36 crc kubenswrapper[4593]: I0129 11:01:36.544428 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w7gmb"] Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.163490 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.164541 4593 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.203064 4593 patch_prober.go:28] interesting pod/console-f9d7485db-8425v container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.203120 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-8425v" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.355809 4593 patch_prober.go:28] interesting pod/router-default-5444994796-xx52v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 11:01:37 crc kubenswrapper[4593]: [+]has-synced ok Jan 29 11:01:37 crc kubenswrapper[4593]: [+]process-running ok Jan 29 11:01:37 crc kubenswrapper[4593]: healthz check failed Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.356145 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xx52v" podUID="9b0e8a32-3284-4c1d-9a6d-3fda064ce2fc" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.457938 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-fm7cc" Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.547464 4593 generic.go:334] "Generic (PLEG): container finished" podID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" containerID="1bf75ace58181af9f0cccb28ad84d5dd8c16c8b69d21079288e4029c1048cd89" exitCode=0 Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.547515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7gmb" event={"ID":"da7a9394-5c19-4a9e-9c6d-652b3ce08477","Type":"ContainerDied","Data":"1bf75ace58181af9f0cccb28ad84d5dd8c16c8b69d21079288e4029c1048cd89"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.547539 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7gmb" event={"ID":"da7a9394-5c19-4a9e-9c6d-652b3ce08477","Type":"ContainerStarted","Data":"72aa027856b0ef03a57066a814eb40eddf13ecfd2d1c62024902a4d79111cf83"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.573961 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69z82" event={"ID":"e424f176-80e8-4029-a500-097e1d9e5b1e","Type":"ContainerStarted","Data":"eef621985e16727acc46b16908219680b25248fd848eacdfa61bcd853a7c18ac"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.583590 4593 generic.go:334] "Generic (PLEG): container finished" podID="9c000e16-ab7a-4247-99da-74ea62d94b89" containerID="8e093f0363d31a3b87d3f9991c3433e34b34cbb53e07ea1c58a964d993b8be1a" exitCode=0 Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.583687 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lf9gr" 
event={"ID":"9c000e16-ab7a-4247-99da-74ea62d94b89","Type":"ContainerDied","Data":"8e093f0363d31a3b87d3f9991c3433e34b34cbb53e07ea1c58a964d993b8be1a"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.764915 4593 generic.go:334] "Generic (PLEG): container finished" podID="6ce733ca-85e0-43f9-a444-9703d600da63" containerID="ee4825fff37e0ca04b8b8e3c87e01fed5f500f91478778493b455fcf75dfd5d6" exitCode=0 Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.764959 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvwft" event={"ID":"6ce733ca-85e0-43f9-a444-9703d600da63","Type":"ContainerDied","Data":"ee4825fff37e0ca04b8b8e3c87e01fed5f500f91478778493b455fcf75dfd5d6"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.764984 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvwft" event={"ID":"6ce733ca-85e0-43f9-a444-9703d600da63","Type":"ContainerStarted","Data":"5a2bdd7e5cb75db5cc0318b63cd7ca3e8135afeaf117d553a67933c149ec867e"} Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.778696 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.778712 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.778742 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:37 crc kubenswrapper[4593]: I0129 11:01:37.778763 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.009482 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.015213 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-zpjgp" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.037968 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.090753 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g72zl"] Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.096685 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.105398 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.178359 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d229804-724c-4e21-89ac-e3369b615389-metrics-certs\") pod \"network-metrics-daemon-7jm9m\" (UID: \"7d229804-724c-4e21-89ac-e3369b615389\") " pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.199987 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir\") pod \"54e3f9bd-cf5f-4361-81b2-78571380f93f\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.200044 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access\") pod \"54e3f9bd-cf5f-4361-81b2-78571380f93f\" (UID: \"54e3f9bd-cf5f-4361-81b2-78571380f93f\") " Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.201379 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "54e3f9bd-cf5f-4361-81b2-78571380f93f" (UID: "54e3f9bd-cf5f-4361-81b2-78571380f93f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.203683 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "54e3f9bd-cf5f-4361-81b2-78571380f93f" (UID: "54e3f9bd-cf5f-4361-81b2-78571380f93f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.299901 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.331363 4593 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/54e3f9bd-cf5f-4361-81b2-78571380f93f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.331400 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54e3f9bd-cf5f-4361-81b2-78571380f93f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.335655 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.349793 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xx52v" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.388452 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7jm9m" Jan 29 11:01:38 crc kubenswrapper[4593]: W0129 11:01:38.416805 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3be8312_dfdd_4359_b8c8_d9b8158fdab4.slice/crio-e3ed61cb166abee85a5cafd4f482b1fd984051495892cd7e58f5727be894ede4 WatchSource:0}: Error finding container e3ed61cb166abee85a5cafd4f482b1fd984051495892cd7e58f5727be894ede4: Status 404 returned error can't find the container with id e3ed61cb166abee85a5cafd4f482b1fd984051495892cd7e58f5727be894ede4 Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.888334 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" event={"ID":"066b2b93-4946-44cf-9757-05c8282cb7a3","Type":"ContainerStarted","Data":"fb99d447e5189720ac881b538d20b70d4e3aef55d12b3a424d01a9dc39152640"} Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.903881 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-6dlwj_5d8acfc6-0334-4294-8dd6-c3091ebb69d3/cluster-samples-operator/0.log" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.903934 4593 generic.go:334] "Generic (PLEG): container finished" podID="5d8acfc6-0334-4294-8dd6-c3091ebb69d3" containerID="bf865df54dd7eea44cdf14782b35051e879ed53d58fecdb5dfaaad1b3e3ed384" exitCode=2 Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.904003 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" event={"ID":"5d8acfc6-0334-4294-8dd6-c3091ebb69d3","Type":"ContainerDied","Data":"bf865df54dd7eea44cdf14782b35051e879ed53d58fecdb5dfaaad1b3e3ed384"} Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.904559 4593 scope.go:117] "RemoveContainer" containerID="bf865df54dd7eea44cdf14782b35051e879ed53d58fecdb5dfaaad1b3e3ed384" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.922243 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7d7" 
event={"ID":"7ba9e41c-b01a-4d45-9272-24aca728f7bc","Type":"ContainerStarted","Data":"f8947bf8603825421d7767efdebe3e5aa280154ddb0198dabfc109bfedbfab57"} Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.945842 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb142e67-1809-4b4f-91d6-1c745a85cb13","Type":"ContainerStarted","Data":"ecf17e2b2f3453ee3e9aff90a681babab3e1dd6bb035e067992d73d5ba5adc5d"} Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.987412 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.990888 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"54e3f9bd-cf5f-4361-81b2-78571380f93f","Type":"ContainerDied","Data":"412596aea7ed79508efc55009f025bcad32104c84207c15c5c4be80493ef4961"} Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.990944 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="412596aea7ed79508efc55009f025bcad32104c84207c15c5c4be80493ef4961" Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.995418 4593 generic.go:334] "Generic (PLEG): container finished" podID="e424f176-80e8-4029-a500-097e1d9e5b1e" containerID="daec26b82fedd17793042a2543f04b2bffe9792c65bc9d01520e1daaec56238e" exitCode=0 Jan 29 11:01:38 crc kubenswrapper[4593]: I0129 11:01:38.995494 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69z82" event={"ID":"e424f176-80e8-4029-a500-097e1d9e5b1e","Type":"ContainerDied","Data":"daec26b82fedd17793042a2543f04b2bffe9792c65bc9d01520e1daaec56238e"} Jan 29 11:01:39 crc kubenswrapper[4593]: I0129 11:01:38.999726 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqhd7" event={"ID":"d3be8312-dfdd-4359-b8c8-d9b8158fdab4","Type":"ContainerStarted","Data":"e3ed61cb166abee85a5cafd4f482b1fd984051495892cd7e58f5727be894ede4"} Jan 29 11:01:39 crc kubenswrapper[4593]: I0129 11:01:39.195809 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=8.195787281 podStartE2EDuration="8.195787281s" podCreationTimestamp="2026-01-29 11:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:38.99332901 +0000 UTC m=+164.866363201" watchObservedRunningTime="2026-01-29 11:01:39.195787281 +0000 UTC m=+165.068821482" Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.039989 4593 generic.go:334] "Generic (PLEG): container finished" podID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" containerID="6a9a45884a6f1cc5b501c7194e0aa2ef03b9fa8ba41ecbcea41cfa16d1d8fa17" exitCode=0 Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.040973 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqhd7" event={"ID":"d3be8312-dfdd-4359-b8c8-d9b8158fdab4","Type":"ContainerDied","Data":"6a9a45884a6f1cc5b501c7194e0aa2ef03b9fa8ba41ecbcea41cfa16d1d8fa17"} Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.070620 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" 
event={"ID":"066b2b93-4946-44cf-9757-05c8282cb7a3","Type":"ContainerStarted","Data":"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3"} Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.071257 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.101060 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-6dlwj_5d8acfc6-0334-4294-8dd6-c3091ebb69d3/cluster-samples-operator/0.log" Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.101155 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6dlwj" event={"ID":"5d8acfc6-0334-4294-8dd6-c3091ebb69d3","Type":"ContainerStarted","Data":"77e1d9df33f67ff19f8f03931cc533ad69f68170903f08b1a53a441097e413ab"} Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.119365 4593 generic.go:334] "Generic (PLEG): container finished" podID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" containerID="3d931ac31836dde066a45b4cd0a61a0a245f5279e75d2cf3230380f6b7a7f2dc" exitCode=0 Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.119452 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7d7" event={"ID":"7ba9e41c-b01a-4d45-9272-24aca728f7bc","Type":"ContainerDied","Data":"3d931ac31836dde066a45b4cd0a61a0a245f5279e75d2cf3230380f6b7a7f2dc"} Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.149610 4593 generic.go:334] "Generic (PLEG): container finished" podID="fb142e67-1809-4b4f-91d6-1c745a85cb13" containerID="ecf17e2b2f3453ee3e9aff90a681babab3e1dd6bb035e067992d73d5ba5adc5d" exitCode=0 Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.149851 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb142e67-1809-4b4f-91d6-1c745a85cb13","Type":"ContainerDied","Data":"ecf17e2b2f3453ee3e9aff90a681babab3e1dd6bb035e067992d73d5ba5adc5d"} Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.150096 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" podStartSLOduration=144.150074975 podStartE2EDuration="2m24.150074975s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:40.146104594 +0000 UTC m=+166.019138785" watchObservedRunningTime="2026-01-29 11:01:40.150074975 +0000 UTC m=+166.023109186" Jan 29 11:01:40 crc kubenswrapper[4593]: I0129 11:01:40.354423 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7jm9m"] Jan 29 11:01:40 crc kubenswrapper[4593]: W0129 11:01:40.642183 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d229804_724c_4e21_89ac_e3369b615389.slice/crio-04db1eed2da4d96703a3194f7b01cfd7fc3b83eac5568838511496301cc46944 WatchSource:0}: Error finding container 04db1eed2da4d96703a3194f7b01cfd7fc3b83eac5568838511496301cc46944: Status 404 returned error can't find the container with id 04db1eed2da4d96703a3194f7b01cfd7fc3b83eac5568838511496301cc46944 Jan 29 11:01:41 crc kubenswrapper[4593]: I0129 11:01:41.174088 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-7jm9m" event={"ID":"7d229804-724c-4e21-89ac-e3369b615389","Type":"ContainerStarted","Data":"04db1eed2da4d96703a3194f7b01cfd7fc3b83eac5568838511496301cc46944"} Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.349930 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.397242 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir\") pod \"fb142e67-1809-4b4f-91d6-1c745a85cb13\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.397551 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access\") pod \"fb142e67-1809-4b4f-91d6-1c745a85cb13\" (UID: \"fb142e67-1809-4b4f-91d6-1c745a85cb13\") " Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.397804 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fb142e67-1809-4b4f-91d6-1c745a85cb13" (UID: "fb142e67-1809-4b4f-91d6-1c745a85cb13"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.420805 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fb142e67-1809-4b4f-91d6-1c745a85cb13" (UID: "fb142e67-1809-4b4f-91d6-1c745a85cb13"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.512611 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fb142e67-1809-4b4f-91d6-1c745a85cb13-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.512669 4593 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fb142e67-1809-4b4f-91d6-1c745a85cb13-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.806494 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" event={"ID":"7d229804-724c-4e21-89ac-e3369b615389","Type":"ContainerStarted","Data":"ec8e97d41005702c44c8ae632aed99d0a195511509305f7d4be2f5e066d8e1d4"} Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.808225 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"fb142e67-1809-4b4f-91d6-1c745a85cb13","Type":"ContainerDied","Data":"d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c"} Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.808280 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d087ad64cc04cfb6e08198781d86078cbfc1c528676b8d2ad6b759271367d41c" Jan 29 11:01:43 crc kubenswrapper[4593]: I0129 11:01:43.808547 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 11:01:45 crc kubenswrapper[4593]: I0129 11:01:45.278736 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7jm9m" podStartSLOduration=149.278701048 podStartE2EDuration="2m29.278701048s" podCreationTimestamp="2026-01-29 10:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:01:45.274312846 +0000 UTC m=+171.147347037" watchObservedRunningTime="2026-01-29 11:01:45.278701048 +0000 UTC m=+171.151735249" Jan 29 11:01:46 crc kubenswrapper[4593]: I0129 11:01:46.351859 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7jm9m" event={"ID":"7d229804-724c-4e21-89ac-e3369b615389","Type":"ContainerStarted","Data":"1cd590312d706f079cabb1272de333cf8d0b3327dd3dd6d04fccf2db0a4c47d9"} Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.217913 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.223448 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.936800 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.937213 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.936820 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.942844 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.942883 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.943482 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017"} pod="openshift-console/downloads-7954f5f757-t7wn4" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.943552 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-t7wn4" 
podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" containerID="cri-o://bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017" gracePeriod=2 Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.947607 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:47 crc kubenswrapper[4593]: I0129 11:01:47.947660 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:48 crc kubenswrapper[4593]: I0129 11:01:48.483253 4593 generic.go:334] "Generic (PLEG): container finished" podID="fa5b3597-636e-4cf0-ad99-755378e23867" containerID="bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017" exitCode=0 Jan 29 11:01:48 crc kubenswrapper[4593]: I0129 11:01:48.483305 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerDied","Data":"bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017"} Jan 29 11:01:50 crc kubenswrapper[4593]: I0129 11:01:50.853497 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerStarted","Data":"80496a0fb2ae3b38d3deddb71735982766589c1b4efad0d47eec09bc50b5dc63"} Jan 29 11:01:50 crc kubenswrapper[4593]: I0129 11:01:50.856351 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:01:50 crc kubenswrapper[4593]: I0129 11:01:50.856721 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:50 crc kubenswrapper[4593]: I0129 11:01:50.856770 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:52 crc kubenswrapper[4593]: I0129 11:01:52.140304 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:52 crc kubenswrapper[4593]: I0129 11:01:52.140604 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:53 crc kubenswrapper[4593]: I0129 11:01:53.168035 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:53 crc kubenswrapper[4593]: I0129 11:01:53.168136 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:56 crc kubenswrapper[4593]: I0129 11:01:56.178090 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:01:57 crc kubenswrapper[4593]: I0129 11:01:57.768413 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:57 crc kubenswrapper[4593]: I0129 11:01:57.768475 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:57 crc kubenswrapper[4593]: I0129 11:01:57.768952 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:01:57 crc kubenswrapper[4593]: I0129 11:01:57.768971 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:01:57 crc kubenswrapper[4593]: I0129 11:01:57.989145 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-m8dfr" Jan 29 11:01:59 crc kubenswrapper[4593]: I0129 11:01:59.505452 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:01:59 crc kubenswrapper[4593]: I0129 11:01:59.505956 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" containerID="cri-o://9eac3a17a0d80747b4c19589283eedb53fbdc19757a21659394b8e0db2f8d72d" gracePeriod=30 Jan 29 11:01:59 crc kubenswrapper[4593]: I0129 11:01:59.617609 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:01:59 crc kubenswrapper[4593]: I0129 11:01:59.617858 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" containerName="route-controller-manager" containerID="cri-o://acbb97693467425ef2ea6a339415e6dda1d0d67a81e3c8acbbbd9196103ea943" gracePeriod=30 Jan 29 11:02:00 crc kubenswrapper[4593]: I0129 
11:02:00.782584 4593 generic.go:334] "Generic (PLEG): container finished" podID="a62104dd-d659-409a-b8f5-85aaf2856a14" containerID="acbb97693467425ef2ea6a339415e6dda1d0d67a81e3c8acbbbd9196103ea943" exitCode=0 Jan 29 11:02:00 crc kubenswrapper[4593]: I0129 11:02:00.782720 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" event={"ID":"a62104dd-d659-409a-b8f5-85aaf2856a14","Type":"ContainerDied","Data":"acbb97693467425ef2ea6a339415e6dda1d0d67a81e3c8acbbbd9196103ea943"} Jan 29 11:02:00 crc kubenswrapper[4593]: I0129 11:02:00.805835 4593 generic.go:334] "Generic (PLEG): container finished" podID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerID="9eac3a17a0d80747b4c19589283eedb53fbdc19757a21659394b8e0db2f8d72d" exitCode=0 Jan 29 11:02:00 crc kubenswrapper[4593]: I0129 11:02:00.805876 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" event={"ID":"76a22425-a78d-4304-b158-f577c6ef4c4f","Type":"ContainerDied","Data":"9eac3a17a0d80747b4c19589283eedb53fbdc19757a21659394b8e0db2f8d72d"} Jan 29 11:02:00 crc kubenswrapper[4593]: I0129 11:02:00.875876 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196261 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:02:01 crc kubenswrapper[4593]: E0129 11:02:01.196530 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e3f9bd-cf5f-4361-81b2-78571380f93f" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196545 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e3f9bd-cf5f-4361-81b2-78571380f93f" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: E0129 11:02:01.196566 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196574 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" Jan 29 11:02:01 crc kubenswrapper[4593]: E0129 11:02:01.196591 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb142e67-1809-4b4f-91d6-1c745a85cb13" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196598 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb142e67-1809-4b4f-91d6-1c745a85cb13" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196742 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb142e67-1809-4b4f-91d6-1c745a85cb13" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196758 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" containerName="controller-manager" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.196772 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e3f9bd-cf5f-4361-81b2-78571380f93f" containerName="pruner" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.197153 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 
11:02:01.197250 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.223945 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.251032 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert\") pod \"76a22425-a78d-4304-b158-f577c6ef4c4f\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.251097 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m95zx\" (UniqueName: \"kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx\") pod \"76a22425-a78d-4304-b158-f577c6ef4c4f\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.251140 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles\") pod \"76a22425-a78d-4304-b158-f577c6ef4c4f\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.251204 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca\") pod \"76a22425-a78d-4304-b158-f577c6ef4c4f\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.251233 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config\") pod \"76a22425-a78d-4304-b158-f577c6ef4c4f\" (UID: \"76a22425-a78d-4304-b158-f577c6ef4c4f\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.252786 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "76a22425-a78d-4304-b158-f577c6ef4c4f" (UID: "76a22425-a78d-4304-b158-f577c6ef4c4f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.253408 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca" (OuterVolumeSpecName: "client-ca") pod "76a22425-a78d-4304-b158-f577c6ef4c4f" (UID: "76a22425-a78d-4304-b158-f577c6ef4c4f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.254267 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config" (OuterVolumeSpecName: "config") pod "76a22425-a78d-4304-b158-f577c6ef4c4f" (UID: "76a22425-a78d-4304-b158-f577c6ef4c4f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.309161 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx" (OuterVolumeSpecName: "kube-api-access-m95zx") pod "76a22425-a78d-4304-b158-f577c6ef4c4f" (UID: "76a22425-a78d-4304-b158-f577c6ef4c4f"). InnerVolumeSpecName "kube-api-access-m95zx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.352425 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config\") pod \"a62104dd-d659-409a-b8f5-85aaf2856a14\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.352555 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert\") pod \"a62104dd-d659-409a-b8f5-85aaf2856a14\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.352708 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca\") pod \"a62104dd-d659-409a-b8f5-85aaf2856a14\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.352998 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2fn6\" (UniqueName: \"kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6\") pod \"a62104dd-d659-409a-b8f5-85aaf2856a14\" (UID: \"a62104dd-d659-409a-b8f5-85aaf2856a14\") " Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.353520 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.353580 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.353620 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354012 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cb9g\" (UniqueName: \"kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: 
\"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354191 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354396 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354410 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354421 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a22425-a78d-4304-b158-f577c6ef4c4f-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.354436 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m95zx\" (UniqueName: \"kubernetes.io/projected/76a22425-a78d-4304-b158-f577c6ef4c4f-kube-api-access-m95zx\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.355429 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config" (OuterVolumeSpecName: "config") pod "a62104dd-d659-409a-b8f5-85aaf2856a14" (UID: "a62104dd-d659-409a-b8f5-85aaf2856a14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.388089 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "76a22425-a78d-4304-b158-f577c6ef4c4f" (UID: "76a22425-a78d-4304-b158-f577c6ef4c4f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.388417 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca" (OuterVolumeSpecName: "client-ca") pod "a62104dd-d659-409a-b8f5-85aaf2856a14" (UID: "a62104dd-d659-409a-b8f5-85aaf2856a14"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.396003 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6" (OuterVolumeSpecName: "kube-api-access-q2fn6") pod "a62104dd-d659-409a-b8f5-85aaf2856a14" (UID: "a62104dd-d659-409a-b8f5-85aaf2856a14"). InnerVolumeSpecName "kube-api-access-q2fn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.396541 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a62104dd-d659-409a-b8f5-85aaf2856a14" (UID: "a62104dd-d659-409a-b8f5-85aaf2856a14"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477071 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477184 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477211 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477244 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477273 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cb9g\" (UniqueName: \"kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477329 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477343 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a62104dd-d659-409a-b8f5-85aaf2856a14-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477355 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a62104dd-d659-409a-b8f5-85aaf2856a14-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477366 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/76a22425-a78d-4304-b158-f577c6ef4c4f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.477377 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2fn6\" (UniqueName: \"kubernetes.io/projected/a62104dd-d659-409a-b8f5-85aaf2856a14-kube-api-access-q2fn6\") on node \"crc\" DevicePath \"\"" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.478610 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.479941 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.481505 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.491866 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.520093 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cb9g\" (UniqueName: \"kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g\") pod \"controller-manager-6b89555d5-2xdxb\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.571682 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.872197 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.872846 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h" event={"ID":"a62104dd-d659-409a-b8f5-85aaf2856a14","Type":"ContainerDied","Data":"9eed55ee0a88f35fc2bf20b9123f7aae8a2cd1091b8b30b1223e2725c98e46d9"} Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.872943 4593 scope.go:117] "RemoveContainer" containerID="acbb97693467425ef2ea6a339415e6dda1d0d67a81e3c8acbbbd9196103ea943" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.879564 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" event={"ID":"76a22425-a78d-4304-b158-f577c6ef4c4f","Type":"ContainerDied","Data":"334a01364083a20e9cff55591ab0397980e71497fd4d2b540c48088a18808a8d"} Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.879695 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9td98" Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.921414 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.927604 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9td98"] Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.935077 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:02:01 crc kubenswrapper[4593]: I0129 11:02:01.939209 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fnv5h"] Jan 29 11:02:02 crc kubenswrapper[4593]: I0129 11:02:02.152595 4593 scope.go:117] "RemoveContainer" containerID="9eac3a17a0d80747b4c19589283eedb53fbdc19757a21659394b8e0db2f8d72d" Jan 29 11:02:02 crc kubenswrapper[4593]: I0129 11:02:02.487784 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:02:02 crc kubenswrapper[4593]: I0129 11:02:02.968778 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" event={"ID":"a4f38956-d909-4b11-8617-fd9fdcc92e10","Type":"ContainerStarted","Data":"21f6b5d0c55de6d3ac91b432cc366d4adadbf13bd4e64cace71084fab1fad375"} Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.101349 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a22425-a78d-4304-b158-f577c6ef4c4f" path="/var/lib/kubelet/pods/76a22425-a78d-4304-b158-f577c6ef4c4f/volumes" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.102253 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" path="/var/lib/kubelet/pods/a62104dd-d659-409a-b8f5-85aaf2856a14/volumes" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.359804 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:02:03 crc kubenswrapper[4593]: E0129 11:02:03.360079 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" 
containerName="route-controller-manager" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.360104 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" containerName="route-controller-manager" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.360233 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62104dd-d659-409a-b8f5-85aaf2856a14" containerName="route-controller-manager" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.360726 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.363433 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.363611 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.363836 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.363957 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.364683 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.367958 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.381898 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.462661 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwlfj\" (UniqueName: \"kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.462825 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.463008 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.463088 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.564499 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwlfj\" (UniqueName: \"kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.570038 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.578478 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.578520 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.579464 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.578346 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.604819 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwlfj\" (UniqueName: \"kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj\") pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.605044 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert\") 
pod \"route-controller-manager-6d497cc759-5d7sb\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:03 crc kubenswrapper[4593]: I0129 11:02:03.700065 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:02:04 crc kubenswrapper[4593]: I0129 11:02:03.998683 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:02:04 crc kubenswrapper[4593]: I0129 11:02:04.003808 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:02:04 crc kubenswrapper[4593]: I0129 11:02:04.281469 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 11:02:04 crc kubenswrapper[4593]: I0129 11:02:04.297083 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" event={"ID":"a4f38956-d909-4b11-8617-fd9fdcc92e10","Type":"ContainerStarted","Data":"98ead1bf2f822aebadbb849468a6ff6ad9ad4689b0f1f94453177be952a2be7c"} Jan 29 11:02:05 crc kubenswrapper[4593]: I0129 11:02:05.639615 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:05 crc kubenswrapper[4593]: I0129 11:02:05.928872 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:05 crc kubenswrapper[4593]: I0129 11:02:05.984888 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podStartSLOduration=6.984871912 podStartE2EDuration="6.984871912s" podCreationTimestamp="2026-01-29 11:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:02:05.926901831 +0000 UTC m=+191.799936022" watchObservedRunningTime="2026-01-29 11:02:05.984871912 +0000 UTC m=+191.857906093" Jan 29 11:02:06 crc kubenswrapper[4593]: I0129 11:02:06.988943 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:02:07 crc kubenswrapper[4593]: I0129 11:02:07.802865 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:07 crc kubenswrapper[4593]: I0129 11:02:07.803081 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 
10.217.0.9:8080: connect: connection refused" Jan 29 11:02:07 crc kubenswrapper[4593]: I0129 11:02:07.803611 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:07 crc kubenswrapper[4593]: I0129 11:02:07.803647 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.545210 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.546534 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.549997 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.550181 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.557926 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.568815 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.569083 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.678046 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.678160 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.678748 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.715755 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:14 crc kubenswrapper[4593]: I0129 11:02:14.875487 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.889759 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.890353 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.890405 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.889800 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.890774 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.891089 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"80496a0fb2ae3b38d3deddb71735982766589c1b4efad0d47eec09bc50b5dc63"} pod="openshift-console/downloads-7954f5f757-t7wn4" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.891122 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" containerID="cri-o://80496a0fb2ae3b38d3deddb71735982766589c1b4efad0d47eec09bc50b5dc63" gracePeriod=2 Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.891146 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:17 crc kubenswrapper[4593]: I0129 11:02:17.891202 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:18 crc kubenswrapper[4593]: I0129 11:02:18.664926 4593 generic.go:334] "Generic (PLEG): container finished" podID="fa5b3597-636e-4cf0-ad99-755378e23867" containerID="80496a0fb2ae3b38d3deddb71735982766589c1b4efad0d47eec09bc50b5dc63" exitCode=0 Jan 29 11:02:18 crc kubenswrapper[4593]: I0129 11:02:18.664996 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerDied","Data":"80496a0fb2ae3b38d3deddb71735982766589c1b4efad0d47eec09bc50b5dc63"} Jan 29 11:02:18 crc kubenswrapper[4593]: I0129 11:02:18.665294 4593 scope.go:117] "RemoveContainer" containerID="bf8a806e158e09e0a95b0c27cb110aaca87b007cd6e7c7a21d47ef28df322017" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.357492 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.358101 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" containerID="cri-o://98ead1bf2f822aebadbb849468a6ff6ad9ad4689b0f1f94453177be952a2be7c" gracePeriod=30 Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.369742 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.540920 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.541589 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.553579 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.713249 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.713349 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.713399 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.725251 4593 generic.go:334] "Generic (PLEG): container finished" podID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerID="98ead1bf2f822aebadbb849468a6ff6ad9ad4689b0f1f94453177be952a2be7c" exitCode=0 Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.725287 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" event={"ID":"a4f38956-d909-4b11-8617-fd9fdcc92e10","Type":"ContainerDied","Data":"98ead1bf2f822aebadbb849468a6ff6ad9ad4689b0f1f94453177be952a2be7c"} Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.815001 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.814925 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.815523 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.815692 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.815877 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.851030 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access\") pod \"installer-9-crc\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:19 crc kubenswrapper[4593]: I0129 11:02:19.925027 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:02:21 crc kubenswrapper[4593]: I0129 11:02:21.573755 4593 patch_prober.go:28] interesting pod/controller-manager-6b89555d5-2xdxb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 29 11:02:21 crc kubenswrapper[4593]: I0129 11:02:21.573827 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 29 11:02:27 crc kubenswrapper[4593]: I0129 11:02:27.799773 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:27 crc kubenswrapper[4593]: I0129 11:02:27.800403 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:31 crc kubenswrapper[4593]: I0129 11:02:31.572684 4593 patch_prober.go:28] interesting pod/controller-manager-6b89555d5-2xdxb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 29 11:02:31 crc kubenswrapper[4593]: I0129 11:02:31.572992 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 29 11:02:31 crc kubenswrapper[4593]: W0129 11:02:31.732951 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4378129_7124_43d0_a1a0_4085d0213d85.slice/crio-4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502 WatchSource:0}: Error finding container 4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502: Status 404 returned error can't find the container with id 
4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502 Jan 29 11:02:32 crc kubenswrapper[4593]: I0129 11:02:32.049562 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" event={"ID":"f4378129-7124-43d0-a1a0-4085d0213d85","Type":"ContainerStarted","Data":"4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502"} Jan 29 11:02:33 crc kubenswrapper[4593]: I0129 11:02:33.946539 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:02:33 crc kubenswrapper[4593]: I0129 11:02:33.946928 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:02:33 crc kubenswrapper[4593]: I0129 11:02:33.946973 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:02:33 crc kubenswrapper[4593]: I0129 11:02:33.947620 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:02:33 crc kubenswrapper[4593]: I0129 11:02:33.947680 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a" gracePeriod=600 Jan 29 11:02:35 crc kubenswrapper[4593]: I0129 11:02:35.210530 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a" exitCode=0 Jan 29 11:02:35 crc kubenswrapper[4593]: I0129 11:02:35.210584 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a"} Jan 29 11:02:37 crc kubenswrapper[4593]: I0129 11:02:37.768514 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:37 crc kubenswrapper[4593]: I0129 11:02:37.768900 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.183707 4593 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jntfl"] Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.185163 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.201465 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jntfl"] Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.238586 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-tls\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.238878 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-trusted-ca\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239005 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fc17831-117a-497d-bc13-b48ed5d95c90-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239115 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv2j4\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-kube-api-access-zv2j4\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239217 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239295 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fc17831-117a-497d-bc13-b48ed5d95c90-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239365 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-bound-sa-token\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 
29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.239453 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-certificates\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.276395 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340252 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-bound-sa-token\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340328 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-certificates\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340356 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-tls\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340383 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-trusted-ca\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340443 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fc17831-117a-497d-bc13-b48ed5d95c90-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340471 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv2j4\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-kube-api-access-zv2j4\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.340527 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/0fc17831-117a-497d-bc13-b48ed5d95c90-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.341308 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0fc17831-117a-497d-bc13-b48ed5d95c90-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.343148 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-certificates\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.366445 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0fc17831-117a-497d-bc13-b48ed5d95c90-trusted-ca\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.367076 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-registry-tls\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.368959 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-bound-sa-token\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.369484 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0fc17831-117a-497d-bc13-b48ed5d95c90-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.398250 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv2j4\" (UniqueName: \"kubernetes.io/projected/0fc17831-117a-497d-bc13-b48ed5d95c90-kube-api-access-zv2j4\") pod \"image-registry-66df7c8f76-jntfl\" (UID: \"0fc17831-117a-497d-bc13-b48ed5d95c90\") " pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.506729 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.573312 4593 patch_prober.go:28] interesting pod/controller-manager-6b89555d5-2xdxb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:02:42 crc kubenswrapper[4593]: I0129 11:02:42.573394 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:02:44 crc kubenswrapper[4593]: E0129 11:02:44.864212 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 11:02:44 crc kubenswrapper[4593]: E0129 11:02:44.865021 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7wxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fgg5s_openshift-marketplace(695d677a-4519-4ff0-9c6a-cbc902b00ee5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:02:44 crc kubenswrapper[4593]: E0129 11:02:44.866411 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fgg5s" 
podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" Jan 29 11:02:47 crc kubenswrapper[4593]: I0129 11:02:47.768042 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:47 crc kubenswrapper[4593]: I0129 11:02:47.768317 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:52 crc kubenswrapper[4593]: I0129 11:02:52.174350 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:02:52 crc kubenswrapper[4593]: I0129 11:02:52.573287 4593 patch_prober.go:28] interesting pod/controller-manager-6b89555d5-2xdxb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 11:02:52 crc kubenswrapper[4593]: I0129 11:02:52.573620 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.286897 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fgg5s" podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.371731 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.371898 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7j57m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qdz2v_openshift-marketplace(3d47516f-05e5-4f96-bf5a-c4251af51b6b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.373044 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qdz2v" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.403334 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.403519 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-879j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tm7d7_openshift-marketplace(7ba9e41c-b01a-4d45-9272-24aca728f7bc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:02:53 crc kubenswrapper[4593]: E0129 11:02:53.404691 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-tm7d7" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.402928 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.410382 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.419125 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lf9gr"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.430115 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w7gmb"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.441425 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.441683 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" containerID="cri-o://134cb2e4c5ab4b63e76188908744960f17a0602be1969f5d2c5bfb52e5ef0868" gracePeriod=30 Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.447592 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.458555 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-tvwft"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.469138 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s2rlp"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.470808 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: E0129 11:02:56.489152 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tm7d7" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" Jan 29 11:02:56 crc kubenswrapper[4593]: E0129 11:02:56.489356 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qdz2v" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.493664 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.509381 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.510706 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s2rlp"] Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.549564 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.549671 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt492\" (UniqueName: \"kubernetes.io/projected/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-kube-api-access-lt492\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.549722 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: E0129 11:02:56.553288 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 11:02:56 crc kubenswrapper[4593]: E0129 11:02:56.553436 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cc45l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-69z82_openshift-marketplace(e424f176-80e8-4029-a500-097e1d9e5b1e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:02:56 crc kubenswrapper[4593]: E0129 11:02:56.556489 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-69z82" podUID="e424f176-80e8-4029-a500-097e1d9e5b1e" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.650899 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.650992 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt492\" (UniqueName: \"kubernetes.io/projected/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-kube-api-access-lt492\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.651039 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 
11:02:56.652685 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.661675 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.673411 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt492\" (UniqueName: \"kubernetes.io/projected/7a59fe58-c900-46ea-8ff2-8a7f49210dc3-kube-api-access-lt492\") pod \"marketplace-operator-79b997595-s2rlp\" (UID: \"7a59fe58-c900-46ea-8ff2-8a7f49210dc3\") " pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.760737 4593 generic.go:334] "Generic (PLEG): container finished" podID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerID="134cb2e4c5ab4b63e76188908744960f17a0602be1969f5d2c5bfb52e5ef0868" exitCode=0 Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.760990 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" event={"ID":"0aa74baf-fde3-4dad-aef0-7b8b1ae90098","Type":"ContainerDied","Data":"134cb2e4c5ab4b63e76188908744960f17a0602be1969f5d2c5bfb52e5ef0868"} Jan 29 11:02:56 crc kubenswrapper[4593]: I0129 11:02:56.796070 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:02:57 crc kubenswrapper[4593]: I0129 11:02:57.769303 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:02:57 crc kubenswrapper[4593]: I0129 11:02:57.769350 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:02:57 crc kubenswrapper[4593]: I0129 11:02:57.991625 4593 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hw52m container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Jan 29 11:02:57 crc kubenswrapper[4593]: I0129 11:02:57.991699 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Jan 29 11:02:59 crc kubenswrapper[4593]: I0129 11:02:59.922777 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:02:59 crc kubenswrapper[4593]: I0129 11:02:59.927126 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:02:59 crc kubenswrapper[4593]: I0129 11:02:59.931108 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:02:59 crc kubenswrapper[4593]: I0129 11:02:59.995599 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.017872 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118101 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content\") pod \"e424f176-80e8-4029-a500-097e1d9e5b1e\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118193 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities\") pod \"e424f176-80e8-4029-a500-097e1d9e5b1e\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118219 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities\") pod \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118251 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cb9g\" (UniqueName: \"kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g\") pod \"a4f38956-d909-4b11-8617-fd9fdcc92e10\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118279 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-879j2\" (UniqueName: \"kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2\") pod \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118300 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities\") pod \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118326 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca\") pod \"a4f38956-d909-4b11-8617-fd9fdcc92e10\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118358 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert\") pod \"a4f38956-d909-4b11-8617-fd9fdcc92e10\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118386 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles\") pod \"a4f38956-d909-4b11-8617-fd9fdcc92e10\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118436 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content\") pod 
\"3d47516f-05e5-4f96-bf5a-c4251af51b6b\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118488 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j57m\" (UniqueName: \"kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m\") pod \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\" (UID: \"3d47516f-05e5-4f96-bf5a-c4251af51b6b\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118513 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config\") pod \"a4f38956-d909-4b11-8617-fd9fdcc92e10\" (UID: \"a4f38956-d909-4b11-8617-fd9fdcc92e10\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118533 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc45l\" (UniqueName: \"kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l\") pod \"e424f176-80e8-4029-a500-097e1d9e5b1e\" (UID: \"e424f176-80e8-4029-a500-097e1d9e5b1e\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.118563 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content\") pod \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\" (UID: \"7ba9e41c-b01a-4d45-9272-24aca728f7bc\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.119097 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ba9e41c-b01a-4d45-9272-24aca728f7bc" (UID: "7ba9e41c-b01a-4d45-9272-24aca728f7bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.119457 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e424f176-80e8-4029-a500-097e1d9e5b1e" (UID: "e424f176-80e8-4029-a500-097e1d9e5b1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.120341 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities" (OuterVolumeSpecName: "utilities") pod "e424f176-80e8-4029-a500-097e1d9e5b1e" (UID: "e424f176-80e8-4029-a500-097e1d9e5b1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.121088 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities" (OuterVolumeSpecName: "utilities") pod "7ba9e41c-b01a-4d45-9272-24aca728f7bc" (UID: "7ba9e41c-b01a-4d45-9272-24aca728f7bc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.124619 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a4f38956-d909-4b11-8617-fd9fdcc92e10" (UID: "a4f38956-d909-4b11-8617-fd9fdcc92e10"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.125272 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca" (OuterVolumeSpecName: "client-ca") pod "a4f38956-d909-4b11-8617-fd9fdcc92e10" (UID: "a4f38956-d909-4b11-8617-fd9fdcc92e10"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.125910 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities" (OuterVolumeSpecName: "utilities") pod "3d47516f-05e5-4f96-bf5a-c4251af51b6b" (UID: "3d47516f-05e5-4f96-bf5a-c4251af51b6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.131385 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config" (OuterVolumeSpecName: "config") pod "a4f38956-d909-4b11-8617-fd9fdcc92e10" (UID: "a4f38956-d909-4b11-8617-fd9fdcc92e10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.132674 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m" (OuterVolumeSpecName: "kube-api-access-7j57m") pod "3d47516f-05e5-4f96-bf5a-c4251af51b6b" (UID: "3d47516f-05e5-4f96-bf5a-c4251af51b6b"). InnerVolumeSpecName "kube-api-access-7j57m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.133094 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d47516f-05e5-4f96-bf5a-c4251af51b6b" (UID: "3d47516f-05e5-4f96-bf5a-c4251af51b6b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.134564 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a4f38956-d909-4b11-8617-fd9fdcc92e10" (UID: "a4f38956-d909-4b11-8617-fd9fdcc92e10"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.134791 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l" (OuterVolumeSpecName: "kube-api-access-cc45l") pod "e424f176-80e8-4029-a500-097e1d9e5b1e" (UID: "e424f176-80e8-4029-a500-097e1d9e5b1e"). InnerVolumeSpecName "kube-api-access-cc45l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.134878 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2" (OuterVolumeSpecName: "kube-api-access-879j2") pod "7ba9e41c-b01a-4d45-9272-24aca728f7bc" (UID: "7ba9e41c-b01a-4d45-9272-24aca728f7bc"). InnerVolumeSpecName "kube-api-access-879j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.138964 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g" (OuterVolumeSpecName: "kube-api-access-8cb9g") pod "a4f38956-d909-4b11-8617-fd9fdcc92e10" (UID: "a4f38956-d909-4b11-8617-fd9fdcc92e10"). InnerVolumeSpecName "kube-api-access-8cb9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.233567 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7wxk\" (UniqueName: \"kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk\") pod \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.233643 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content\") pod \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.233726 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities\") pod \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\" (UID: \"695d677a-4519-4ff0-9c6a-cbc902b00ee5\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234151 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234168 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e424f176-80e8-4029-a500-097e1d9e5b1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234181 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234193 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cb9g\" (UniqueName: \"kubernetes.io/projected/a4f38956-d909-4b11-8617-fd9fdcc92e10-kube-api-access-8cb9g\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234207 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234220 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-879j2\" (UniqueName: 
\"kubernetes.io/projected/7ba9e41c-b01a-4d45-9272-24aca728f7bc-kube-api-access-879j2\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234231 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234245 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4f38956-d909-4b11-8617-fd9fdcc92e10-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.236528 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "695d677a-4519-4ff0-9c6a-cbc902b00ee5" (UID: "695d677a-4519-4ff0-9c6a-cbc902b00ee5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.237178 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities" (OuterVolumeSpecName: "utilities") pod "695d677a-4519-4ff0-9c6a-cbc902b00ee5" (UID: "695d677a-4519-4ff0-9c6a-cbc902b00ee5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.234257 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.249735 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d47516f-05e5-4f96-bf5a-c4251af51b6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.249749 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j57m\" (UniqueName: \"kubernetes.io/projected/3d47516f-05e5-4f96-bf5a-c4251af51b6b-kube-api-access-7j57m\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.249766 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4f38956-d909-4b11-8617-fd9fdcc92e10-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.249962 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc45l\" (UniqueName: \"kubernetes.io/projected/e424f176-80e8-4029-a500-097e1d9e5b1e-kube-api-access-cc45l\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.249976 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ba9e41c-b01a-4d45-9272-24aca728f7bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.250058 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk" (OuterVolumeSpecName: "kube-api-access-t7wxk") pod "695d677a-4519-4ff0-9c6a-cbc902b00ee5" (UID: "695d677a-4519-4ff0-9c6a-cbc902b00ee5"). InnerVolumeSpecName "kube-api-access-t7wxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.350799 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7wxk\" (UniqueName: \"kubernetes.io/projected/695d677a-4519-4ff0-9c6a-cbc902b00ee5-kube-api-access-t7wxk\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.350831 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.350843 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/695d677a-4519-4ff0-9c6a-cbc902b00ee5-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.604677 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 11:03:00 crc kubenswrapper[4593]: W0129 11:03:00.614607 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1e47dc9d_9af5_4d14_b8f3_f227d93c792d.slice/crio-811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee WatchSource:0}: Error finding container 811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee: Status 404 returned error can't find the container with id 811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.616426 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.629027 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.654162 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca\") pod \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.654215 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srcl6\" (UniqueName: \"kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6\") pod \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.654250 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics\") pod \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\" (UID: \"0aa74baf-fde3-4dad-aef0-7b8b1ae90098\") " Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.657244 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "0aa74baf-fde3-4dad-aef0-7b8b1ae90098" (UID: "0aa74baf-fde3-4dad-aef0-7b8b1ae90098"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.659851 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "0aa74baf-fde3-4dad-aef0-7b8b1ae90098" (UID: "0aa74baf-fde3-4dad-aef0-7b8b1ae90098"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.667835 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6" (OuterVolumeSpecName: "kube-api-access-srcl6") pod "0aa74baf-fde3-4dad-aef0-7b8b1ae90098" (UID: "0aa74baf-fde3-4dad-aef0-7b8b1ae90098"). InnerVolumeSpecName "kube-api-access-srcl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.671859 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s2rlp"] Jan 29 11:03:00 crc kubenswrapper[4593]: W0129 11:03:00.674797 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc78186dc_c8e4_4018_8e50_f7fc0e719890.slice/crio-3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b WatchSource:0}: Error finding container 3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b: Status 404 returned error can't find the container with id 3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b Jan 29 11:03:00 crc kubenswrapper[4593]: W0129 11:03:00.677765 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a59fe58_c900_46ea_8ff2_8a7f49210dc3.slice/crio-55a51fe6ef01babc611d8975c87f095f629fd2120fbfeae87b8861d6aed6cbfe WatchSource:0}: Error finding container 55a51fe6ef01babc611d8975c87f095f629fd2120fbfeae87b8861d6aed6cbfe: Status 404 returned error can't find the container with id 55a51fe6ef01babc611d8975c87f095f629fd2120fbfeae87b8861d6aed6cbfe Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.748568 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749026 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749043 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749052 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e424f176-80e8-4029-a500-097e1d9e5b1e" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749057 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e424f176-80e8-4029-a500-097e1d9e5b1e" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749069 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749077 4593 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749093 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749099 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749134 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749141 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: E0129 11:03:00.749150 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749155 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749320 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e424f176-80e8-4029-a500-097e1d9e5b1e" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749331 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749339 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749350 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" containerName="controller-manager" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749359 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" containerName="extract-utilities" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749366 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" containerName="marketplace-operator" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.749791 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.755431 4593 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.755499 4593 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.755516 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srcl6\" (UniqueName: \"kubernetes.io/projected/0aa74baf-fde3-4dad-aef0-7b8b1ae90098-kube-api-access-srcl6\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.760619 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.801330 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qdz2v" event={"ID":"3d47516f-05e5-4f96-bf5a-c4251af51b6b","Type":"ContainerDied","Data":"96ef38f406756da164944fbca4b3b1aac366663320c1359747791a21ca1ed585"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.801678 4593 scope.go:117] "RemoveContainer" containerID="45fd11091e4829626417cd96b671777720a463c182e9d6f349c55edbbe7126c6" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.801794 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qdz2v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.820791 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c78186dc-c8e4-4018-8e50-f7fc0e719890","Type":"ContainerStarted","Data":"3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.823192 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jntfl"] Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.859214 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" event={"ID":"a4f38956-d909-4b11-8617-fd9fdcc92e10","Type":"ContainerDied","Data":"21f6b5d0c55de6d3ac91b432cc366d4adadbf13bd4e64cace71084fab1fad375"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.888918 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b89555d5-2xdxb" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.902347 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" event={"ID":"f4378129-7124-43d0-a1a0-4085d0213d85","Type":"ContainerStarted","Data":"56d5157444e050b6f16a3cd3db852cdaa6435ef728d9605dbdd7a7adb3a64e51"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.904746 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e47dc9d-9af5-4d14-b8f3-f227d93c792d","Type":"ContainerStarted","Data":"811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee"} Jan 29 11:03:00 crc kubenswrapper[4593]: W0129 11:03:00.906188 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fc17831_117a_497d_bc13_b48ed5d95c90.slice/crio-9b647fad55bd50ea48e0f58ea14adbf61b46da149a2b2cc52b6c87e79960acd5 WatchSource:0}: Error finding container 9b647fad55bd50ea48e0f58ea14adbf61b46da149a2b2cc52b6c87e79960acd5: Status 404 returned error can't find the container with id 9b647fad55bd50ea48e0f58ea14adbf61b46da149a2b2cc52b6c87e79960acd5 Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.906779 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fgg5s" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.907534 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fgg5s" event={"ID":"695d677a-4519-4ff0-9c6a-cbc902b00ee5","Type":"ContainerDied","Data":"73c935e8b979b7dc8ab160b89b0aa92943613ba07d23ca3617474e48390b50f1"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.922100 4593 scope.go:117] "RemoveContainer" containerID="98ead1bf2f822aebadbb849468a6ff6ad9ad4689b0f1f94453177be952a2be7c" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.924896 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tm7d7" event={"ID":"7ba9e41c-b01a-4d45-9272-24aca728f7bc","Type":"ContainerDied","Data":"f8947bf8603825421d7767efdebe3e5aa280154ddb0198dabfc109bfedbfab57"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.924985 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tm7d7" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.934444 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" event={"ID":"7a59fe58-c900-46ea-8ff2-8a7f49210dc3","Type":"ContainerStarted","Data":"55a51fe6ef01babc611d8975c87f095f629fd2120fbfeae87b8861d6aed6cbfe"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.937384 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-69z82" event={"ID":"e424f176-80e8-4029-a500-097e1d9e5b1e","Type":"ContainerDied","Data":"eef621985e16727acc46b16908219680b25248fd848eacdfa61bcd853a7c18ac"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.937497 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-69z82" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.940211 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" event={"ID":"0aa74baf-fde3-4dad-aef0-7b8b1ae90098","Type":"ContainerDied","Data":"b58de0681837cbb0473d918da193d9a2ae22eb516c0709127c7bbdd54537d3ef"} Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.940297 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hw52m" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.961102 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmdp\" (UniqueName: \"kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.961153 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.961181 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.961204 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.961228 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.985070 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:03:00 crc kubenswrapper[4593]: I0129 11:03:00.992806 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b89555d5-2xdxb"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.057275 4593 scope.go:117] "RemoveContainer" containerID="b9d5c7d4701eae15759c1c9b230bf47aaf13c122f4acea86bd71b0030082917d" Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.057261 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.057424 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwmr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w7gmb_openshift-marketplace(da7a9394-5c19-4a9e-9c6d-652b3ce08477): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.058531 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w7gmb" podUID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.062897 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmmdp\" (UniqueName: \"kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.062941 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.062968 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config\") pod 
\"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.062989 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.063021 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.065576 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.067243 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.067488 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.068374 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.087260 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f38956-d909-4b11-8617-fd9fdcc92e10" path="/var/lib/kubelet/pods/a4f38956-d909-4b11-8617-fd9fdcc92e10/volumes" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.093379 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.102517 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qdz2v"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.108019 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.114381 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-hw52m"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.114944 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmmdp\" (UniqueName: \"kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp\") pod \"controller-manager-5b5b564f5c-4lr6v\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.150589 4593 scope.go:117] "RemoveContainer" containerID="3d931ac31836dde066a45b4cd0a61a0a245f5279e75d2cf3230380f6b7a7f2dc" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.150977 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.158587 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tm7d7"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.181164 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.193262 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fgg5s"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.201466 4593 scope.go:117] "RemoveContainer" containerID="daec26b82fedd17793042a2543f04b2bffe9792c65bc9d01520e1daaec56238e" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.227988 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.232518 4593 scope.go:117] "RemoveContainer" containerID="134cb2e4c5ab4b63e76188908744960f17a0602be1969f5d2c5bfb52e5ef0868" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.239579 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-69z82"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.386970 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.660965 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.697873 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.701740 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwkcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cqhd7_openshift-marketplace(d3be8312-dfdd-4359-b8c8-d9b8158fdab4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:03:01 crc kubenswrapper[4593]: E0129 11:03:01.702897 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cqhd7" podUID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.823100 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kt56h"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.824612 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.826681 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.840312 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kt56h"] Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.950689 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-t7wn4" event={"ID":"fa5b3597-636e-4cf0-ad99-755378e23867","Type":"ContainerStarted","Data":"da6c305fc9b4c36ff1aec13c8062f2c0c0d8fc4e42de88cb5476d8e17fdd0fdc"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.951211 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.951284 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.951323 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.954951 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" event={"ID":"7a59fe58-c900-46ea-8ff2-8a7f49210dc3","Type":"ContainerStarted","Data":"1322e0b9140cfd25133d356253fbbffb5b8abfcdf97b1fb98dc5f672c80a5589"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.955174 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.959558 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" event={"ID":"0fc17831-117a-497d-bc13-b48ed5d95c90","Type":"ContainerStarted","Data":"ef53e07a0641f4e11c6001a1d0f9039045d18f8efa57411c7acf284a77d10665"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.959599 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" event={"ID":"0fc17831-117a-497d-bc13-b48ed5d95c90","Type":"ContainerStarted","Data":"9b647fad55bd50ea48e0f58ea14adbf61b46da149a2b2cc52b6c87e79960acd5"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.960428 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.964673 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.966406 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" 
event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.972299 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c78186dc-c8e4-4018-8e50-f7fc0e719890","Type":"ContainerStarted","Data":"1944570fd0d711d5a3ddcb6c09ae1efbc4f659af6ced43239c4b6ab7e0c86a58"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.973214 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" event={"ID":"1b7bc172-8368-4c52-a739-34655c0e9686","Type":"ContainerStarted","Data":"a0d208891d18d712bd489561852a82f696e7d25c808617b7fe312d4e3430e177"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.984265 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-utilities\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.984313 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-catalog-content\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.984344 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjtg\" (UniqueName: \"kubernetes.io/projected/f0d1455d-ba27-48f0-be57-3d8e91a63853-kube-api-access-qjjtg\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.999606 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e47dc9d-9af5-4d14-b8f3-f227d93c792d","Type":"ContainerStarted","Data":"698371c58f150386702001acf70ee1dd100d06b388a9c7e51ab1417419f484f6"} Jan 29 11:03:01 crc kubenswrapper[4593]: I0129 11:03:01.999716 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" podUID="f4378129-7124-43d0-a1a0-4085d0213d85" containerName="route-controller-manager" containerID="cri-o://56d5157444e050b6f16a3cd3db852cdaa6435ef728d9605dbdd7a7adb3a64e51" gracePeriod=30 Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.000467 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.014112 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.086758 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-utilities\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") 
" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.086818 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-catalog-content\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.086865 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjtg\" (UniqueName: \"kubernetes.io/projected/f0d1455d-ba27-48f0-be57-3d8e91a63853-kube-api-access-qjjtg\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.089383 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-utilities\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.089988 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d1455d-ba27-48f0-be57-3d8e91a63853-catalog-content\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.109689 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=48.109669326 podStartE2EDuration="48.109669326s" podCreationTimestamp="2026-01-29 11:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:02.108751631 +0000 UTC m=+247.981785822" watchObservedRunningTime="2026-01-29 11:03:02.109669326 +0000 UTC m=+247.982703517" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.128570 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjjtg\" (UniqueName: \"kubernetes.io/projected/f0d1455d-ba27-48f0-be57-3d8e91a63853-kube-api-access-qjjtg\") pod \"certified-operators-kt56h\" (UID: \"f0d1455d-ba27-48f0-be57-3d8e91a63853\") " pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.156866 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=43.156843485 podStartE2EDuration="43.156843485s" podCreationTimestamp="2026-01-29 11:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:02.149705036 +0000 UTC m=+248.022739247" watchObservedRunningTime="2026-01-29 11:03:02.156843485 +0000 UTC m=+248.029877676" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.167938 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.255833 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-s2rlp" podStartSLOduration=6.255817413 podStartE2EDuration="6.255817413s" podCreationTimestamp="2026-01-29 11:02:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:02.24536747 +0000 UTC m=+248.118401661" watchObservedRunningTime="2026-01-29 11:03:02.255817413 +0000 UTC m=+248.128851594" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.257749 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" podStartSLOduration=63.257743826 podStartE2EDuration="1m3.257743826s" podCreationTimestamp="2026-01-29 11:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:02.21352334 +0000 UTC m=+248.086557531" watchObservedRunningTime="2026-01-29 11:03:02.257743826 +0000 UTC m=+248.130778017" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.287360 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" podStartSLOduration=20.287343254 podStartE2EDuration="20.287343254s" podCreationTimestamp="2026-01-29 11:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:02.285163933 +0000 UTC m=+248.158198134" watchObservedRunningTime="2026-01-29 11:03:02.287343254 +0000 UTC m=+248.160377445" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.311069 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.311205 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spqr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lf9gr_openshift-marketplace(9c000e16-ab7a-4247-99da-74ea62d94b89): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.314779 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lf9gr" podUID="9c000e16-ab7a-4247-99da-74ea62d94b89" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.637582 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.667949 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kt56h"] Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.696498 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.701671 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content\") pod \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.701756 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities\") pod \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.701796 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwmr4\" (UniqueName: \"kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4\") pod \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\" (UID: \"da7a9394-5c19-4a9e-9c6d-652b3ce08477\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.703085 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da7a9394-5c19-4a9e-9c6d-652b3ce08477" (UID: "da7a9394-5c19-4a9e-9c6d-652b3ce08477"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.704841 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities" (OuterVolumeSpecName: "utilities") pod "da7a9394-5c19-4a9e-9c6d-652b3ce08477" (UID: "da7a9394-5c19-4a9e-9c6d-652b3ce08477"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.714867 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4" (OuterVolumeSpecName: "kube-api-access-mwmr4") pod "da7a9394-5c19-4a9e-9c6d-652b3ce08477" (UID: "da7a9394-5c19-4a9e-9c6d-652b3ce08477"). InnerVolumeSpecName "kube-api-access-mwmr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.776989 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.777269 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5bhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-tvwft_openshift-marketplace(6ce733ca-85e0-43f9-a444-9703d600da63): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:03:02 crc kubenswrapper[4593]: E0129 11:03:02.778563 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tvwft" podUID="6ce733ca-85e0-43f9-a444-9703d600da63" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.802664 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content\") pod \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.803010 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities\") pod \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.803830 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwkcz\" 
(UniqueName: \"kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz\") pod \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\" (UID: \"d3be8312-dfdd-4359-b8c8-d9b8158fdab4\") " Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.804487 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.804614 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a9394-5c19-4a9e-9c6d-652b3ce08477-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.804711 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwmr4\" (UniqueName: \"kubernetes.io/projected/da7a9394-5c19-4a9e-9c6d-652b3ce08477-kube-api-access-mwmr4\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.803074 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3be8312-dfdd-4359-b8c8-d9b8158fdab4" (UID: "d3be8312-dfdd-4359-b8c8-d9b8158fdab4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.803785 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities" (OuterVolumeSpecName: "utilities") pod "d3be8312-dfdd-4359-b8c8-d9b8158fdab4" (UID: "d3be8312-dfdd-4359-b8c8-d9b8158fdab4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.809172 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz" (OuterVolumeSpecName: "kube-api-access-vwkcz") pod "d3be8312-dfdd-4359-b8c8-d9b8158fdab4" (UID: "d3be8312-dfdd-4359-b8c8-d9b8158fdab4"). InnerVolumeSpecName "kube-api-access-vwkcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.906582 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.907006 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwkcz\" (UniqueName: \"kubernetes.io/projected/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-kube-api-access-vwkcz\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:02 crc kubenswrapper[4593]: I0129 11:03:02.907022 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3be8312-dfdd-4359-b8c8-d9b8158fdab4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.005493 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4378129-7124-43d0-a1a0-4085d0213d85" containerID="56d5157444e050b6f16a3cd3db852cdaa6435ef728d9605dbdd7a7adb3a64e51" exitCode=0 Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.005576 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" event={"ID":"f4378129-7124-43d0-a1a0-4085d0213d85","Type":"ContainerDied","Data":"56d5157444e050b6f16a3cd3db852cdaa6435ef728d9605dbdd7a7adb3a64e51"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.005607 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" event={"ID":"f4378129-7124-43d0-a1a0-4085d0213d85","Type":"ContainerDied","Data":"4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.005619 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b801c5d5fcdc244600a5adf83fd979dc53a8e86763b672bd2bec0c0db5bb502" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.006857 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cqhd7" event={"ID":"d3be8312-dfdd-4359-b8c8-d9b8158fdab4","Type":"ContainerDied","Data":"e3ed61cb166abee85a5cafd4f482b1fd984051495892cd7e58f5727be894ede4"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.006882 4593 scope.go:117] "RemoveContainer" containerID="6a9a45884a6f1cc5b501c7194e0aa2ef03b9fa8ba41ecbcea41cfa16d1d8fa17" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.007007 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cqhd7" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.015369 4593 generic.go:334] "Generic (PLEG): container finished" podID="1e47dc9d-9af5-4d14-b8f3-f227d93c792d" containerID="698371c58f150386702001acf70ee1dd100d06b388a9c7e51ab1417419f484f6" exitCode=0 Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.015430 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e47dc9d-9af5-4d14-b8f3-f227d93c792d","Type":"ContainerDied","Data":"698371c58f150386702001acf70ee1dd100d06b388a9c7e51ab1417419f484f6"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.020876 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w7gmb" event={"ID":"da7a9394-5c19-4a9e-9c6d-652b3ce08477","Type":"ContainerDied","Data":"72aa027856b0ef03a57066a814eb40eddf13ecfd2d1c62024902a4d79111cf83"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.020973 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w7gmb" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.036785 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" event={"ID":"1b7bc172-8368-4c52-a739-34655c0e9686","Type":"ContainerStarted","Data":"efb497ce95c8b16f5f44e4fd898aa8797a4e7f63f9e2310f49fd9b1e6b2b5c23"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.037706 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.038408 4593 generic.go:334] "Generic (PLEG): container finished" podID="f0d1455d-ba27-48f0-be57-3d8e91a63853" containerID="da9803603a32c2b1706f9f56f2f7fd646c19157b252303218bfff0d2077cf305" exitCode=0 Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.038613 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.038651 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt56h" event={"ID":"f0d1455d-ba27-48f0-be57-3d8e91a63853","Type":"ContainerDied","Data":"da9803603a32c2b1706f9f56f2f7fd646c19157b252303218bfff0d2077cf305"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.038667 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt56h" event={"ID":"f0d1455d-ba27-48f0-be57-3d8e91a63853","Type":"ContainerStarted","Data":"c0f3efbce7e67af8cb25c4825c2bac1610293b1ae77dcc4e6435612734c04f47"} Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.045337 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.045373 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 
11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.045662 4593 scope.go:117] "RemoveContainer" containerID="1bf75ace58181af9f0cccb28ad84d5dd8c16c8b69d21079288e4029c1048cd89" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.051115 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.094150 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa74baf-fde3-4dad-aef0-7b8b1ae90098" path="/var/lib/kubelet/pods/0aa74baf-fde3-4dad-aef0-7b8b1ae90098/volumes" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.094754 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d47516f-05e5-4f96-bf5a-c4251af51b6b" path="/var/lib/kubelet/pods/3d47516f-05e5-4f96-bf5a-c4251af51b6b/volumes" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.095301 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695d677a-4519-4ff0-9c6a-cbc902b00ee5" path="/var/lib/kubelet/pods/695d677a-4519-4ff0-9c6a-cbc902b00ee5/volumes" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.097326 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ba9e41c-b01a-4d45-9272-24aca728f7bc" path="/var/lib/kubelet/pods/7ba9e41c-b01a-4d45-9272-24aca728f7bc/volumes" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.097836 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e424f176-80e8-4029-a500-097e1d9e5b1e" path="/var/lib/kubelet/pods/e424f176-80e8-4029-a500-097e1d9e5b1e/volumes" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.113876 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.113990 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config\") pod \"f4378129-7124-43d0-a1a0-4085d0213d85\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.114035 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert\") pod \"f4378129-7124-43d0-a1a0-4085d0213d85\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.114164 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwlfj\" (UniqueName: \"kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj\") pod \"f4378129-7124-43d0-a1a0-4085d0213d85\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.114208 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca\") pod \"f4378129-7124-43d0-a1a0-4085d0213d85\" (UID: \"f4378129-7124-43d0-a1a0-4085d0213d85\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.116217 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca" (OuterVolumeSpecName: "client-ca") pod "f4378129-7124-43d0-a1a0-4085d0213d85" (UID: "f4378129-7124-43d0-a1a0-4085d0213d85"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.117202 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config" (OuterVolumeSpecName: "config") pod "f4378129-7124-43d0-a1a0-4085d0213d85" (UID: "f4378129-7124-43d0-a1a0-4085d0213d85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.125418 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj" (OuterVolumeSpecName: "kube-api-access-rwlfj") pod "f4378129-7124-43d0-a1a0-4085d0213d85" (UID: "f4378129-7124-43d0-a1a0-4085d0213d85"). InnerVolumeSpecName "kube-api-access-rwlfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.125744 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f4378129-7124-43d0-a1a0-4085d0213d85" (UID: "f4378129-7124-43d0-a1a0-4085d0213d85"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.131354 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cqhd7"] Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.158048 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" podStartSLOduration=4.15802795 podStartE2EDuration="4.15802795s" podCreationTimestamp="2026-01-29 11:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:03.146085836 +0000 UTC m=+249.019120047" watchObservedRunningTime="2026-01-29 11:03:03.15802795 +0000 UTC m=+249.031062141" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.217927 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwlfj\" (UniqueName: \"kubernetes.io/projected/f4378129-7124-43d0-a1a0-4085d0213d85-kube-api-access-rwlfj\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.217972 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.218006 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4378129-7124-43d0-a1a0-4085d0213d85-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.218015 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4378129-7124-43d0-a1a0-4085d0213d85-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.220282 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w7gmb"] Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.232844 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-w7gmb"] Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.336914 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.396261 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420121 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content\") pod \"9c000e16-ab7a-4247-99da-74ea62d94b89\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420263 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content\") pod \"6ce733ca-85e0-43f9-a444-9703d600da63\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420291 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities\") pod \"9c000e16-ab7a-4247-99da-74ea62d94b89\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420308 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5bhb\" (UniqueName: \"kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb\") pod \"6ce733ca-85e0-43f9-a444-9703d600da63\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420333 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spqr2\" (UniqueName: \"kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2\") pod \"9c000e16-ab7a-4247-99da-74ea62d94b89\" (UID: \"9c000e16-ab7a-4247-99da-74ea62d94b89\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.420351 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities\") pod \"6ce733ca-85e0-43f9-a444-9703d600da63\" (UID: \"6ce733ca-85e0-43f9-a444-9703d600da63\") " Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.421367 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities" (OuterVolumeSpecName: "utilities") pod "6ce733ca-85e0-43f9-a444-9703d600da63" (UID: "6ce733ca-85e0-43f9-a444-9703d600da63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.421442 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c000e16-ab7a-4247-99da-74ea62d94b89" (UID: "9c000e16-ab7a-4247-99da-74ea62d94b89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.422045 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ce733ca-85e0-43f9-a444-9703d600da63" (UID: "6ce733ca-85e0-43f9-a444-9703d600da63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.422567 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities" (OuterVolumeSpecName: "utilities") pod "9c000e16-ab7a-4247-99da-74ea62d94b89" (UID: "9c000e16-ab7a-4247-99da-74ea62d94b89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.425811 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2" (OuterVolumeSpecName: "kube-api-access-spqr2") pod "9c000e16-ab7a-4247-99da-74ea62d94b89" (UID: "9c000e16-ab7a-4247-99da-74ea62d94b89"). InnerVolumeSpecName "kube-api-access-spqr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.425942 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb" (OuterVolumeSpecName: "kube-api-access-p5bhb") pod "6ce733ca-85e0-43f9-a444-9703d600da63" (UID: "6ce733ca-85e0-43f9-a444-9703d600da63"). InnerVolumeSpecName "kube-api-access-p5bhb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522050 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522082 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522092 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5bhb\" (UniqueName: \"kubernetes.io/projected/6ce733ca-85e0-43f9-a444-9703d600da63-kube-api-access-p5bhb\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522104 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spqr2\" (UniqueName: \"kubernetes.io/projected/9c000e16-ab7a-4247-99da-74ea62d94b89-kube-api-access-spqr2\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522112 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ce733ca-85e0-43f9-a444-9703d600da63-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.522120 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c000e16-ab7a-4247-99da-74ea62d94b89-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617443 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vbjtl"] Jan 29 11:03:03 crc kubenswrapper[4593]: E0129 11:03:03.617675 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4378129-7124-43d0-a1a0-4085d0213d85" containerName="route-controller-manager" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617687 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4378129-7124-43d0-a1a0-4085d0213d85" containerName="route-controller-manager" Jan 29 11:03:03 crc kubenswrapper[4593]: E0129 11:03:03.617696 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ce733ca-85e0-43f9-a444-9703d600da63" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617702 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ce733ca-85e0-43f9-a444-9703d600da63" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: E0129 11:03:03.617714 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617720 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: E0129 11:03:03.617728 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617733 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: E0129 11:03:03.617748 4593 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9c000e16-ab7a-4247-99da-74ea62d94b89" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617755 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c000e16-ab7a-4247-99da-74ea62d94b89" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617840 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617849 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c000e16-ab7a-4247-99da-74ea62d94b89" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617861 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617867 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ce733ca-85e0-43f9-a444-9703d600da63" containerName="extract-utilities" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.617877 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4378129-7124-43d0-a1a0-4085d0213d85" containerName="route-controller-manager" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.618564 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.625725 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.640120 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vbjtl"] Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.725250 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-utilities\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.725330 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lqx\" (UniqueName: \"kubernetes.io/projected/954251cb-5bea-456e-8d36-27eda2fe92d6-kube-api-access-z9lqx\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.725381 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-catalog-content\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.833146 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9lqx\" (UniqueName: \"kubernetes.io/projected/954251cb-5bea-456e-8d36-27eda2fe92d6-kube-api-access-z9lqx\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc 
kubenswrapper[4593]: I0129 11:03:03.833211 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-catalog-content\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.833242 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-utilities\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.833620 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-utilities\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.834117 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/954251cb-5bea-456e-8d36-27eda2fe92d6-catalog-content\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.853969 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9lqx\" (UniqueName: \"kubernetes.io/projected/954251cb-5bea-456e-8d36-27eda2fe92d6-kube-api-access-z9lqx\") pod \"redhat-operators-vbjtl\" (UID: \"954251cb-5bea-456e-8d36-27eda2fe92d6\") " pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:03 crc kubenswrapper[4593]: I0129 11:03:03.933194 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.051719 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lf9gr" event={"ID":"9c000e16-ab7a-4247-99da-74ea62d94b89","Type":"ContainerDied","Data":"e852468ceed93d241feec7b7965eaf616d41cdfd72c07bd89b3ac0aca81937b9"} Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.052042 4593 scope.go:117] "RemoveContainer" containerID="8e093f0363d31a3b87d3f9991c3433e34b34cbb53e07ea1c58a964d993b8be1a" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.052150 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lf9gr" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.063398 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tvwft" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.063438 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tvwft" event={"ID":"6ce733ca-85e0-43f9-a444-9703d600da63","Type":"ContainerDied","Data":"5a2bdd7e5cb75db5cc0318b63cd7ca3e8135afeaf117d553a67933c149ec867e"} Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.067052 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.071331 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.071378 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.119706 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lf9gr"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.128913 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lf9gr"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.140353 4593 scope.go:117] "RemoveContainer" containerID="ee4825fff37e0ca04b8b8e3c87e01fed5f500f91478778493b455fcf75dfd5d6" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.145014 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.148211 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d497cc759-5d7sb"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.193501 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvwft"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.197917 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tvwft"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.371335 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:03:04 crc kubenswrapper[4593]: W0129 11:03:04.389599 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod954251cb_5bea_456e_8d36_27eda2fe92d6.slice/crio-0b1a9e6769d710e77157cd15808fc586479abe3e668b2515e7a6df15a8295d3a WatchSource:0}: Error finding container 0b1a9e6769d710e77157cd15808fc586479abe3e668b2515e7a6df15a8295d3a: Status 404 returned error can't find the container with id 0b1a9e6769d710e77157cd15808fc586479abe3e668b2515e7a6df15a8295d3a Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.392957 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vbjtl"] Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.561760 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access\") pod \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.562066 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir\") pod \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\" (UID: \"1e47dc9d-9af5-4d14-b8f3-f227d93c792d\") " Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.562512 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1e47dc9d-9af5-4d14-b8f3-f227d93c792d" (UID: "1e47dc9d-9af5-4d14-b8f3-f227d93c792d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.570931 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1e47dc9d-9af5-4d14-b8f3-f227d93c792d" (UID: "1e47dc9d-9af5-4d14-b8f3-f227d93c792d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.663971 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:04 crc kubenswrapper[4593]: I0129 11:03:04.664891 4593 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1e47dc9d-9af5-4d14-b8f3-f227d93c792d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.074069 4593 generic.go:334] "Generic (PLEG): container finished" podID="954251cb-5bea-456e-8d36-27eda2fe92d6" containerID="dc67b1b441df9db7285d242722d5600d9639c1caa2a14882031e742233b35a0f" exitCode=0 Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.085443 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.086550 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ce733ca-85e0-43f9-a444-9703d600da63" path="/var/lib/kubelet/pods/6ce733ca-85e0-43f9-a444-9703d600da63/volumes" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.089004 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c000e16-ab7a-4247-99da-74ea62d94b89" path="/var/lib/kubelet/pods/9c000e16-ab7a-4247-99da-74ea62d94b89/volumes" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.089831 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3be8312-dfdd-4359-b8c8-d9b8158fdab4" path="/var/lib/kubelet/pods/d3be8312-dfdd-4359-b8c8-d9b8158fdab4/volumes" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.090559 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da7a9394-5c19-4a9e-9c6d-652b3ce08477" path="/var/lib/kubelet/pods/da7a9394-5c19-4a9e-9c6d-652b3ce08477/volumes" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.092319 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4378129-7124-43d0-a1a0-4085d0213d85" path="/var/lib/kubelet/pods/f4378129-7124-43d0-a1a0-4085d0213d85/volumes" Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.093270 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vbjtl" event={"ID":"954251cb-5bea-456e-8d36-27eda2fe92d6","Type":"ContainerDied","Data":"dc67b1b441df9db7285d242722d5600d9639c1caa2a14882031e742233b35a0f"} Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.093417 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vbjtl" event={"ID":"954251cb-5bea-456e-8d36-27eda2fe92d6","Type":"ContainerStarted","Data":"0b1a9e6769d710e77157cd15808fc586479abe3e668b2515e7a6df15a8295d3a"} Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.093522 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"1e47dc9d-9af5-4d14-b8f3-f227d93c792d","Type":"ContainerDied","Data":"811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee"} Jan 29 11:03:05 crc kubenswrapper[4593]: I0129 11:03:05.093625 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="811cedf4e5e4f52e17c53349ccf7b03f1591201b305a753a4c76009127c216ee" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.220528 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-57v5l"] Jan 29 11:03:06 crc kubenswrapper[4593]: E0129 11:03:06.221016 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e47dc9d-9af5-4d14-b8f3-f227d93c792d" containerName="pruner" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.221027 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e47dc9d-9af5-4d14-b8f3-f227d93c792d" containerName="pruner" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.221123 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e47dc9d-9af5-4d14-b8f3-f227d93c792d" containerName="pruner" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.221818 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.224886 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.240260 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-57v5l"] Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.286870 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-catalog-content\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.286948 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-utilities\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.286977 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whh4p\" (UniqueName: \"kubernetes.io/projected/3ae70d27-10ec-4015-851d-d84aaf99d782-kube-api-access-whh4p\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.387838 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-catalog-content\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.387924 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-utilities\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.387951 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whh4p\" (UniqueName: \"kubernetes.io/projected/3ae70d27-10ec-4015-851d-d84aaf99d782-kube-api-access-whh4p\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.388339 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-catalog-content\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.388611 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ae70d27-10ec-4015-851d-d84aaf99d782-utilities\") pod \"community-operators-57v5l\" (UID: 
\"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.411109 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whh4p\" (UniqueName: \"kubernetes.io/projected/3ae70d27-10ec-4015-851d-d84aaf99d782-kube-api-access-whh4p\") pod \"community-operators-57v5l\" (UID: \"3ae70d27-10ec-4015-851d-d84aaf99d782\") " pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.534993 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.787595 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.788706 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.791326 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.791644 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.791709 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.796187 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.799986 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.803768 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.804066 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.953315 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.953713 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfsgg\" (UniqueName: \"kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.953764 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:06 crc kubenswrapper[4593]: I0129 11:03:06.953785 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.054930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.055000 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfsgg\" (UniqueName: \"kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.055053 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.055078 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.056193 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.056586 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.062905 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert\") pod 
\"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.073306 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfsgg\" (UniqueName: \"kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg\") pod \"route-controller-manager-58bf7649d7-2zw9b\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.097031 4593 generic.go:334] "Generic (PLEG): container finished" podID="f0d1455d-ba27-48f0-be57-3d8e91a63853" containerID="90a3c8fe6e3b3c67889ebc6d5bc0e4f5101fb783bf937cb0cff6d2c277cde15e" exitCode=0 Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.097085 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt56h" event={"ID":"f0d1455d-ba27-48f0-be57-3d8e91a63853","Type":"ContainerDied","Data":"90a3c8fe6e3b3c67889ebc6d5bc0e4f5101fb783bf937cb0cff6d2c277cde15e"} Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.113133 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-57v5l"] Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.118989 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:07 crc kubenswrapper[4593]: W0129 11:03:07.122423 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ae70d27_10ec_4015_851d_d84aaf99d782.slice/crio-debeaa2cc637dd40f30edffd853e193912cfa521951ee9027867cd02cd495805 WatchSource:0}: Error finding container debeaa2cc637dd40f30edffd853e193912cfa521951ee9027867cd02cd495805: Status 404 returned error can't find the container with id debeaa2cc637dd40f30edffd853e193912cfa521951ee9027867cd02cd495805 Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.546484 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.767979 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.768290 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.768053 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:07 crc kubenswrapper[4593]: I0129 11:03:07.768338 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" 
podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.103945 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" event={"ID":"0853e6a7-14da-4065-b7e5-4090e64c8335","Type":"ContainerStarted","Data":"bc65351199a792aef25e18639b762df27be08050c27757be6a902bd41f818ecb"} Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.104000 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" event={"ID":"0853e6a7-14da-4065-b7e5-4090e64c8335","Type":"ContainerStarted","Data":"21ade5a578e280b9b59a20196ece09521420534fe714ba11867382d7f37334ad"} Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.105193 4593 generic.go:334] "Generic (PLEG): container finished" podID="3ae70d27-10ec-4015-851d-d84aaf99d782" containerID="a4d7fe7f20fdaffdd69fd8fa9fd3f50b3a1065337b6fe8179e47e8a996045175" exitCode=0 Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.105226 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57v5l" event={"ID":"3ae70d27-10ec-4015-851d-d84aaf99d782","Type":"ContainerDied","Data":"a4d7fe7f20fdaffdd69fd8fa9fd3f50b3a1065337b6fe8179e47e8a996045175"} Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.105246 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57v5l" event={"ID":"3ae70d27-10ec-4015-851d-d84aaf99d782","Type":"ContainerStarted","Data":"debeaa2cc637dd40f30edffd853e193912cfa521951ee9027867cd02cd495805"} Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.621414 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v2f96"] Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.622511 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.625775 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.676044 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2f96"] Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.790029 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-catalog-content\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.790166 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8gv\" (UniqueName: \"kubernetes.io/projected/69a313ce-b443-4080-9eea-bde0c61dc59d-kube-api-access-bs8gv\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.790193 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-utilities\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.891813 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs8gv\" (UniqueName: \"kubernetes.io/projected/69a313ce-b443-4080-9eea-bde0c61dc59d-kube-api-access-bs8gv\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.891862 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-utilities\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.891893 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-catalog-content\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.892369 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-catalog-content\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.892705 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a313ce-b443-4080-9eea-bde0c61dc59d-utilities\") pod \"redhat-marketplace-v2f96\" (UID: 
\"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.911939 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs8gv\" (UniqueName: \"kubernetes.io/projected/69a313ce-b443-4080-9eea-bde0c61dc59d-kube-api-access-bs8gv\") pod \"redhat-marketplace-v2f96\" (UID: \"69a313ce-b443-4080-9eea-bde0c61dc59d\") " pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:08 crc kubenswrapper[4593]: I0129 11:03:08.937777 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:09 crc kubenswrapper[4593]: I0129 11:03:09.125878 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:09 crc kubenswrapper[4593]: I0129 11:03:09.145702 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:09 crc kubenswrapper[4593]: I0129 11:03:09.208577 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" podStartSLOduration=10.208551221 podStartE2EDuration="10.208551221s" podCreationTimestamp="2026-01-29 11:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:09.204545909 +0000 UTC m=+255.077580100" watchObservedRunningTime="2026-01-29 11:03:09.208551221 +0000 UTC m=+255.081585412" Jan 29 11:03:09 crc kubenswrapper[4593]: I0129 11:03:09.563214 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2f96"] Jan 29 11:03:12 crc kubenswrapper[4593]: W0129 11:03:12.044060 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a313ce_b443_4080_9eea_bde0c61dc59d.slice/crio-a09bdfa43709fff979414ad3e2c68f9d117cc2abf495bde82517af8fdbd23fd2 WatchSource:0}: Error finding container a09bdfa43709fff979414ad3e2c68f9d117cc2abf495bde82517af8fdbd23fd2: Status 404 returned error can't find the container with id a09bdfa43709fff979414ad3e2c68f9d117cc2abf495bde82517af8fdbd23fd2 Jan 29 11:03:12 crc kubenswrapper[4593]: I0129 11:03:12.221823 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2f96" event={"ID":"69a313ce-b443-4080-9eea-bde0c61dc59d","Type":"ContainerStarted","Data":"a09bdfa43709fff979414ad3e2c68f9d117cc2abf495bde82517af8fdbd23fd2"} Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.251044 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57v5l" event={"ID":"3ae70d27-10ec-4015-851d-d84aaf99d782","Type":"ContainerStarted","Data":"b01cf87c464002d003adad1df6433bb907f431ed214d1bcde8a84c6da9246667"} Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.253179 4593 generic.go:334] "Generic (PLEG): container finished" podID="69a313ce-b443-4080-9eea-bde0c61dc59d" containerID="fed25ad9139b9cfcd6fb12417440a8ebfc2bb9d954511884a4747cc4e7b08432" exitCode=0 Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.253246 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2f96" 
event={"ID":"69a313ce-b443-4080-9eea-bde0c61dc59d","Type":"ContainerDied","Data":"fed25ad9139b9cfcd6fb12417440a8ebfc2bb9d954511884a4747cc4e7b08432"} Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.256620 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vbjtl" event={"ID":"954251cb-5bea-456e-8d36-27eda2fe92d6","Type":"ContainerStarted","Data":"0c86ba93f1ff030bcfb900d11758b1232ffa6e02adae8fe5018449d1c26ee3a9"} Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.259778 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" containerID="cri-o://0951708a49a18c39b5089e8701a82e83976042f4ab61f945ea72ff61a2c3931c" gracePeriod=15 Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.271703 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kt56h" event={"ID":"f0d1455d-ba27-48f0-be57-3d8e91a63853","Type":"ContainerStarted","Data":"d3ae6c551b97e3c2a1aa5587184f94da8da17ffe874a2ca331b108bdd06a45e0"} Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.348395 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kt56h" podStartSLOduration=3.41084184 podStartE2EDuration="16.348376242s" podCreationTimestamp="2026-01-29 11:03:01 +0000 UTC" firstStartedPulling="2026-01-29 11:03:03.052985232 +0000 UTC m=+248.926019423" lastFinishedPulling="2026-01-29 11:03:15.990519594 +0000 UTC m=+261.863553825" observedRunningTime="2026-01-29 11:03:17.345888104 +0000 UTC m=+263.218922305" watchObservedRunningTime="2026-01-29 11:03:17.348376242 +0000 UTC m=+263.221410433" Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.413284 4593 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-ftchp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.413332 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.767467 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.767520 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.767813 4593 patch_prober.go:28] interesting pod/downloads-7954f5f757-t7wn4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" 
start-of-body= Jan 29 11:03:17 crc kubenswrapper[4593]: I0129 11:03:17.767948 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-t7wn4" podUID="fa5b3597-636e-4cf0-ad99-755378e23867" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Jan 29 11:03:18 crc kubenswrapper[4593]: I0129 11:03:18.279193 4593 generic.go:334] "Generic (PLEG): container finished" podID="3ae70d27-10ec-4015-851d-d84aaf99d782" containerID="b01cf87c464002d003adad1df6433bb907f431ed214d1bcde8a84c6da9246667" exitCode=0 Jan 29 11:03:18 crc kubenswrapper[4593]: I0129 11:03:18.280476 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57v5l" event={"ID":"3ae70d27-10ec-4015-851d-d84aaf99d782","Type":"ContainerDied","Data":"b01cf87c464002d003adad1df6433bb907f431ed214d1bcde8a84c6da9246667"} Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.262181 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.262383 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" containerName="controller-manager" containerID="cri-o://efb497ce95c8b16f5f44e4fd898aa8797a4e7f63f9e2310f49fd9b1e6b2b5c23" gracePeriod=30 Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.310751 4593 generic.go:334] "Generic (PLEG): container finished" podID="954251cb-5bea-456e-8d36-27eda2fe92d6" containerID="0c86ba93f1ff030bcfb900d11758b1232ffa6e02adae8fe5018449d1c26ee3a9" exitCode=0 Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.311523 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vbjtl" event={"ID":"954251cb-5bea-456e-8d36-27eda2fe92d6","Type":"ContainerDied","Data":"0c86ba93f1ff030bcfb900d11758b1232ffa6e02adae8fe5018449d1c26ee3a9"} Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.316746 4593 generic.go:334] "Generic (PLEG): container finished" podID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerID="0951708a49a18c39b5089e8701a82e83976042f4ab61f945ea72ff61a2c3931c" exitCode=0 Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.316870 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" event={"ID":"e544204e-7186-4a22-a6bf-79a5101af4b6","Type":"ContainerDied","Data":"0951708a49a18c39b5089e8701a82e83976042f4ab61f945ea72ff61a2c3931c"} Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.373778 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:19 crc kubenswrapper[4593]: I0129 11:03:19.373971 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" podUID="0853e6a7-14da-4065-b7e5-4090e64c8335" containerName="route-controller-manager" containerID="cri-o://bc65351199a792aef25e18639b762df27be08050c27757be6a902bd41f818ecb" gracePeriod=30 Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.241759 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.271852 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"] Jan 29 11:03:20 crc kubenswrapper[4593]: E0129 11:03:20.272094 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.272108 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.272225 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" containerName="oauth-openshift" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.272666 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.293472 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"] Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.336669 4593 generic.go:334] "Generic (PLEG): container finished" podID="0853e6a7-14da-4065-b7e5-4090e64c8335" containerID="bc65351199a792aef25e18639b762df27be08050c27757be6a902bd41f818ecb" exitCode=0 Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.336734 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" event={"ID":"0853e6a7-14da-4065-b7e5-4090e64c8335","Type":"ContainerDied","Data":"bc65351199a792aef25e18639b762df27be08050c27757be6a902bd41f818ecb"} Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.340248 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.340242 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-ftchp" event={"ID":"e544204e-7186-4a22-a6bf-79a5101af4b6","Type":"ContainerDied","Data":"0d7cf3673b86763198bedf6c07542fda69ead3075260207ea60dca64f8d8ae64"} Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.340439 4593 scope.go:117] "RemoveContainer" containerID="0951708a49a18c39b5089e8701a82e83976042f4ab61f945ea72ff61a2c3931c" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.342191 4593 generic.go:334] "Generic (PLEG): container finished" podID="1b7bc172-8368-4c52-a739-34655c0e9686" containerID="efb497ce95c8b16f5f44e4fd898aa8797a4e7f63f9e2310f49fd9b1e6b2b5c23" exitCode=0 Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.342230 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" event={"ID":"1b7bc172-8368-4c52-a739-34655c0e9686","Type":"ContainerDied","Data":"efb497ce95c8b16f5f44e4fd898aa8797a4e7f63f9e2310f49fd9b1e6b2b5c23"} Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429572 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429646 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429687 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429723 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429762 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429786 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") " Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429800 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429813 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") "
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429867 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") "
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429904 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") "
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.429951 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") "
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430017 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q92mj\" (UniqueName: \"kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") "
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430042 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") "
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430081 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") "
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430102 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca\") pod \"e544204e-7186-4a22-a6bf-79a5101af4b6\" (UID: \"e544204e-7186-4a22-a6bf-79a5101af4b6\") "
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430259 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430301 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430323 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-login\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430346 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-session\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430376 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-service-ca\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430420 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-router-certs\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430448 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-error\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430481 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxws\" (UniqueName: \"kubernetes.io/projected/7fa6519b-42fa-4af8-a739-e77110dff723-kube-api-access-wzxws\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430505 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430535 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6519b-42fa-4af8-a739-e77110dff723-audit-dir\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430556 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-audit-policies\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430573 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430608 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430656 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.430709 4593 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.431201 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.431614 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.432072 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.434469 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.435002 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.435625 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.435946 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.436400 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.436745 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.437069 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.437841 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.439620 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.441235 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj" (OuterVolumeSpecName: "kube-api-access-q92mj") pod "e544204e-7186-4a22-a6bf-79a5101af4b6" (UID: "e544204e-7186-4a22-a6bf-79a5101af4b6"). InnerVolumeSpecName "kube-api-access-q92mj". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.532113 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.532628 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.532961 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-login\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.533804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-session\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.534133 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-service-ca\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.534044 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-cliconfig\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.534431 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-router-certs\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.534862 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-error\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " 
pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.535002 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-service-ca\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.535328 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzxws\" (UniqueName: \"kubernetes.io/projected/7fa6519b-42fa-4af8-a739-e77110dff723-kube-api-access-wzxws\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.535851 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.536189 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.536693 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6519b-42fa-4af8-a739-e77110dff723-audit-dir\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.537056 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-audit-policies\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.537360 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.537596 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 
11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.538997 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545314 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545349 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545364 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545382 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545396 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545410 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545427 4593 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545443 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q92mj\" (UniqueName: \"kubernetes.io/projected/e544204e-7186-4a22-a6bf-79a5101af4b6-kube-api-access-q92mj\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545457 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545470 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545486 4593 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545500 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545513 4593 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e544204e-7186-4a22-a6bf-79a5101af4b6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.545136 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-login\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.538078 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-session\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.536799 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7fa6519b-42fa-4af8-a739-e77110dff723-audit-dir\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.538398 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-audit-policies\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.539042 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-error\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.537718 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-router-certs\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.540115 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.541127 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-serving-cert\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.542997 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.544173 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7fa6519b-42fa-4af8-a739-e77110dff723-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.561271 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzxws\" (UniqueName: \"kubernetes.io/projected/7fa6519b-42fa-4af8-a739-e77110dff723-kube-api-access-wzxws\") pod \"oauth-openshift-75b7b58d79-s2j2l\" (UID: \"7fa6519b-42fa-4af8-a739-e77110dff723\") " pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.585695 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.681994 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.687067 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-ftchp"] Jan 29 11:03:20 crc kubenswrapper[4593]: I0129 11:03:20.994618 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-75b7b58d79-s2j2l"] Jan 29 11:03:20 crc kubenswrapper[4593]: W0129 11:03:20.996279 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fa6519b_42fa_4af8_a739_e77110dff723.slice/crio-6f90b6d65c0cdf2253109ad1469b11a32f0d8f181d8f0d1b056b56c2eb5e3b5c WatchSource:0}: Error finding container 6f90b6d65c0cdf2253109ad1469b11a32f0d8f181d8f0d1b056b56c2eb5e3b5c: Status 404 returned error can't find the container with id 6f90b6d65c0cdf2253109ad1469b11a32f0d8f181d8f0d1b056b56c2eb5e3b5c Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.051940 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.086403 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e544204e-7186-4a22-a6bf-79a5101af4b6" path="/var/lib/kubelet/pods/e544204e-7186-4a22-a6bf-79a5101af4b6/volumes" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.152189 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config\") pod \"0853e6a7-14da-4065-b7e5-4090e64c8335\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.153244 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config" (OuterVolumeSpecName: "config") pod "0853e6a7-14da-4065-b7e5-4090e64c8335" (UID: "0853e6a7-14da-4065-b7e5-4090e64c8335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.153328 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfsgg\" (UniqueName: \"kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg\") pod \"0853e6a7-14da-4065-b7e5-4090e64c8335\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.153867 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca\") pod \"0853e6a7-14da-4065-b7e5-4090e64c8335\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.153912 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert\") pod \"0853e6a7-14da-4065-b7e5-4090e64c8335\" (UID: \"0853e6a7-14da-4065-b7e5-4090e64c8335\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.154132 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.154493 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca" (OuterVolumeSpecName: "client-ca") pod "0853e6a7-14da-4065-b7e5-4090e64c8335" (UID: "0853e6a7-14da-4065-b7e5-4090e64c8335"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.159106 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0853e6a7-14da-4065-b7e5-4090e64c8335" (UID: "0853e6a7-14da-4065-b7e5-4090e64c8335"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.159181 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg" (OuterVolumeSpecName: "kube-api-access-gfsgg") pod "0853e6a7-14da-4065-b7e5-4090e64c8335" (UID: "0853e6a7-14da-4065-b7e5-4090e64c8335"). InnerVolumeSpecName "kube-api-access-gfsgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.255593 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0853e6a7-14da-4065-b7e5-4090e64c8335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.255654 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfsgg\" (UniqueName: \"kubernetes.io/projected/0853e6a7-14da-4065-b7e5-4090e64c8335-kube-api-access-gfsgg\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.255673 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0853e6a7-14da-4065-b7e5-4090e64c8335-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.350612 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" event={"ID":"7fa6519b-42fa-4af8-a739-e77110dff723","Type":"ContainerStarted","Data":"6f90b6d65c0cdf2253109ad1469b11a32f0d8f181d8f0d1b056b56c2eb5e3b5c"} Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.351950 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" event={"ID":"0853e6a7-14da-4065-b7e5-4090e64c8335","Type":"ContainerDied","Data":"21ade5a578e280b9b59a20196ece09521420534fe714ba11867382d7f37334ad"} Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.351980 4593 scope.go:117] "RemoveContainer" containerID="bc65351199a792aef25e18639b762df27be08050c27757be6a902bd41f818ecb" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.352077 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.381236 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.384385 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58bf7649d7-2zw9b"] Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.492081 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.558975 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert\") pod \"1b7bc172-8368-4c52-a739-34655c0e9686\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.559056 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmmdp\" (UniqueName: \"kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp\") pod \"1b7bc172-8368-4c52-a739-34655c0e9686\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.559159 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config\") pod \"1b7bc172-8368-4c52-a739-34655c0e9686\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.559211 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles\") pod \"1b7bc172-8368-4c52-a739-34655c0e9686\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.559238 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca\") pod \"1b7bc172-8368-4c52-a739-34655c0e9686\" (UID: \"1b7bc172-8368-4c52-a739-34655c0e9686\") " Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.560004 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1b7bc172-8368-4c52-a739-34655c0e9686" (UID: "1b7bc172-8368-4c52-a739-34655c0e9686"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.560086 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config" (OuterVolumeSpecName: "config") pod "1b7bc172-8368-4c52-a739-34655c0e9686" (UID: "1b7bc172-8368-4c52-a739-34655c0e9686"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.560134 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca" (OuterVolumeSpecName: "client-ca") pod "1b7bc172-8368-4c52-a739-34655c0e9686" (UID: "1b7bc172-8368-4c52-a739-34655c0e9686"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.575924 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1b7bc172-8368-4c52-a739-34655c0e9686" (UID: "1b7bc172-8368-4c52-a739-34655c0e9686"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.575992 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp" (OuterVolumeSpecName: "kube-api-access-wmmdp") pod "1b7bc172-8368-4c52-a739-34655c0e9686" (UID: "1b7bc172-8368-4c52-a739-34655c0e9686"). InnerVolumeSpecName "kube-api-access-wmmdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.660879 4593 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.660934 4593 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.660946 4593 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1b7bc172-8368-4c52-a739-34655c0e9686-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.660958 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmmdp\" (UniqueName: \"kubernetes.io/projected/1b7bc172-8368-4c52-a739-34655c0e9686-kube-api-access-wmmdp\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:21 crc kubenswrapper[4593]: I0129 11:03:21.660972 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b7bc172-8368-4c52-a739-34655c0e9686-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.168699 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.169317 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.359413 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" event={"ID":"1b7bc172-8368-4c52-a739-34655c0e9686","Type":"ContainerDied","Data":"a0d208891d18d712bd489561852a82f696e7d25c808617b7fe312d4e3430e177"} Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.359462 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.359824 4593 scope.go:117] "RemoveContainer" containerID="efb497ce95c8b16f5f44e4fd898aa8797a4e7f63f9e2310f49fd9b1e6b2b5c23" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.363063 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" event={"ID":"7fa6519b-42fa-4af8-a739-e77110dff723","Type":"ContainerStarted","Data":"6e770a6481464a86de15f3f2462eee83bfaa47f18624d09d1bb8334e0c3a28c5"} Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.363412 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.386792 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" podStartSLOduration=30.386772095 podStartE2EDuration="30.386772095s" podCreationTimestamp="2026-01-29 11:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:22.385562932 +0000 UTC m=+268.258597123" watchObservedRunningTime="2026-01-29 11:03:22.386772095 +0000 UTC m=+268.259806296" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.387334 4593 patch_prober.go:28] interesting pod/controller-manager-5b5b564f5c-4lr6v container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded" start-of-body= Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.387393 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.415897 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.419500 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b5b564f5c-4lr6v"] Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.513296 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jntfl" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.610384 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g72zl"] Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.773226 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-784bc8c69-h6rvq"] Jan 29 11:03:22 crc kubenswrapper[4593]: E0129 11:03:22.773464 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0853e6a7-14da-4065-b7e5-4090e64c8335" containerName="route-controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.773479 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0853e6a7-14da-4065-b7e5-4090e64c8335" containerName="route-controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: E0129 11:03:22.773495 4593 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" containerName="controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.773503 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" containerName="controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.773649 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="0853e6a7-14da-4065-b7e5-4090e64c8335" containerName="route-controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.773664 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" containerName="controller-manager" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.774166 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.780804 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.781718 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.781728 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.781972 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.784505 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.784702 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.816039 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.878907 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784bc8c69-h6rvq"] Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.890006 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-proxy-ca-bundles\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.890065 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc7vc\" (UniqueName: \"kubernetes.io/projected/6ddee183-1516-4cc4-96c3-ee15973bfd37-kube-api-access-hc7vc\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.890110 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-config\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.890136 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-client-ca\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.890162 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ddee183-1516-4cc4-96c3-ee15973bfd37-serving-cert\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.951856 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.991673 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc7vc\" (UniqueName: \"kubernetes.io/projected/6ddee183-1516-4cc4-96c3-ee15973bfd37-kube-api-access-hc7vc\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.991734 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-config\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.991766 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-client-ca\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.991791 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ddee183-1516-4cc4-96c3-ee15973bfd37-serving-cert\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.991812 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-proxy-ca-bundles\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.992762 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-client-ca\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.992939 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-proxy-ca-bundles\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:22 crc kubenswrapper[4593]: I0129 11:03:22.993291 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ddee183-1516-4cc4-96c3-ee15973bfd37-config\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.001340 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ddee183-1516-4cc4-96c3-ee15973bfd37-serving-cert\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.035361 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc7vc\" (UniqueName: \"kubernetes.io/projected/6ddee183-1516-4cc4-96c3-ee15973bfd37-kube-api-access-hc7vc\") pod \"controller-manager-784bc8c69-h6rvq\" (UID: \"6ddee183-1516-4cc4-96c3-ee15973bfd37\") " pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.051706 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-75b7b58d79-s2j2l" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.081251 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0853e6a7-14da-4065-b7e5-4090e64c8335" path="/var/lib/kubelet/pods/0853e6a7-14da-4065-b7e5-4090e64c8335/volumes" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.082133 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7bc172-8368-4c52-a739-34655c0e9686" path="/var/lib/kubelet/pods/1b7bc172-8368-4c52-a739-34655c0e9686/volumes" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.088539 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.467176 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kt56h" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.773751 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb"] Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.774544 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.776811 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.776877 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.776811 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.776932 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.777176 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.783534 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb"] Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.786104 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.903232 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrbn\" (UniqueName: \"kubernetes.io/projected/d6728980-2950-4c7e-b09d-cae4db914258-kube-api-access-nbrbn\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.903303 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6728980-2950-4c7e-b09d-cae4db914258-serving-cert\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.903329 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-config\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:23 crc kubenswrapper[4593]: I0129 11:03:23.903355 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-client-ca\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.004919 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbrbn\" (UniqueName: \"kubernetes.io/projected/d6728980-2950-4c7e-b09d-cae4db914258-kube-api-access-nbrbn\") pod 
\"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.005261 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6728980-2950-4c7e-b09d-cae4db914258-serving-cert\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.005355 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-config\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.005443 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-client-ca\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.006354 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-client-ca\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.006852 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6728980-2950-4c7e-b09d-cae4db914258-config\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.014804 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6728980-2950-4c7e-b09d-cae4db914258-serving-cert\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.021736 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbrbn\" (UniqueName: \"kubernetes.io/projected/d6728980-2950-4c7e-b09d-cae4db914258-kube-api-access-nbrbn\") pod \"route-controller-manager-6dd454476b-t4npb\" (UID: \"d6728980-2950-4c7e-b09d-cae4db914258\") " pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:24 crc kubenswrapper[4593]: I0129 11:03:24.098786 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:27 crc kubenswrapper[4593]: I0129 11:03:27.774085 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-t7wn4" Jan 29 11:03:32 crc kubenswrapper[4593]: I0129 11:03:32.772489 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-784bc8c69-h6rvq"] Jan 29 11:03:32 crc kubenswrapper[4593]: I0129 11:03:32.779045 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb"] Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.440781 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" event={"ID":"d6728980-2950-4c7e-b09d-cae4db914258","Type":"ContainerStarted","Data":"1b2b9e787bfa050fc341035a11b4cf967f296b555dead5093c4663216ce62282"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.441792 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" event={"ID":"d6728980-2950-4c7e-b09d-cae4db914258","Type":"ContainerStarted","Data":"c25f16ca8313c76ec2eaad0c1786b65a4cf02ff766d8c72679738fd5de55b623"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.442413 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.443500 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-57v5l" event={"ID":"3ae70d27-10ec-4015-851d-d84aaf99d782","Type":"ContainerStarted","Data":"1ce53b2d0b99b2d6bb3eb602b1207e6091bd4890c409dac160c98e3d3e644ad4"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.445336 4593 generic.go:334] "Generic (PLEG): container finished" podID="69a313ce-b443-4080-9eea-bde0c61dc59d" containerID="4b372ce4759d57dd107215b9809c6dedc94cb89c19e57bfaa5d8813228456028" exitCode=0 Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.445410 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2f96" event={"ID":"69a313ce-b443-4080-9eea-bde0c61dc59d","Type":"ContainerDied","Data":"4b372ce4759d57dd107215b9809c6dedc94cb89c19e57bfaa5d8813228456028"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.449156 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vbjtl" event={"ID":"954251cb-5bea-456e-8d36-27eda2fe92d6","Type":"ContainerStarted","Data":"965f550baeaa01cf189d37cd289f67433885e86d9afdfae25850d9668a83e5eb"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.451592 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" event={"ID":"6ddee183-1516-4cc4-96c3-ee15973bfd37","Type":"ContainerStarted","Data":"817c7022ca4e52724cb75331da50e95d1974eac52c110d92826abd12ca66762a"} Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.451648 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" event={"ID":"6ddee183-1516-4cc4-96c3-ee15973bfd37","Type":"ContainerStarted","Data":"c95faa64d73eb92d048669d1a66e4c361409f4086b95ed49fd5e768d25706c2f"} Jan 29 11:03:33 crc kubenswrapper[4593]: 
I0129 11:03:33.452599 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.460858 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.477748 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" podStartSLOduration=14.477708344 podStartE2EDuration="14.477708344s" podCreationTimestamp="2026-01-29 11:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:33.474281161 +0000 UTC m=+279.347315372" watchObservedRunningTime="2026-01-29 11:03:33.477708344 +0000 UTC m=+279.350742535" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.515291 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-57v5l" podStartSLOduration=4.6060970900000004 podStartE2EDuration="27.515277604s" podCreationTimestamp="2026-01-29 11:03:06 +0000 UTC" firstStartedPulling="2026-01-29 11:03:09.126709612 +0000 UTC m=+254.999743803" lastFinishedPulling="2026-01-29 11:03:32.035890126 +0000 UTC m=+277.908924317" observedRunningTime="2026-01-29 11:03:33.511972704 +0000 UTC m=+279.385006895" watchObservedRunningTime="2026-01-29 11:03:33.515277604 +0000 UTC m=+279.388311795" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.561921 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vbjtl" podStartSLOduration=4.12171246 podStartE2EDuration="30.56190273s" podCreationTimestamp="2026-01-29 11:03:03 +0000 UTC" firstStartedPulling="2026-01-29 11:03:05.700866881 +0000 UTC m=+251.573901072" lastFinishedPulling="2026-01-29 11:03:32.141057151 +0000 UTC m=+278.014091342" observedRunningTime="2026-01-29 11:03:33.560658406 +0000 UTC m=+279.433692597" watchObservedRunningTime="2026-01-29 11:03:33.56190273 +0000 UTC m=+279.434936911" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.608769 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-784bc8c69-h6rvq" podStartSLOduration=14.608749772 podStartE2EDuration="14.608749772s" podCreationTimestamp="2026-01-29 11:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:33.605782211 +0000 UTC m=+279.478816422" watchObservedRunningTime="2026-01-29 11:03:33.608749772 +0000 UTC m=+279.481783963" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.703650 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dd454476b-t4npb" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.934316 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:33 crc kubenswrapper[4593]: I0129 11:03:33.934505 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:34 crc kubenswrapper[4593]: I0129 11:03:34.461436 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-v2f96" event={"ID":"69a313ce-b443-4080-9eea-bde0c61dc59d","Type":"ContainerStarted","Data":"338033a6a905298191ca2e1da847e7c408756ddb734b172e1d817bed36172496"} Jan 29 11:03:34 crc kubenswrapper[4593]: I0129 11:03:34.483542 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v2f96" podStartSLOduration=9.66208734 podStartE2EDuration="26.483508093s" podCreationTimestamp="2026-01-29 11:03:08 +0000 UTC" firstStartedPulling="2026-01-29 11:03:17.256858528 +0000 UTC m=+263.129892719" lastFinishedPulling="2026-01-29 11:03:34.078279281 +0000 UTC m=+279.951313472" observedRunningTime="2026-01-29 11:03:34.477533861 +0000 UTC m=+280.350568062" watchObservedRunningTime="2026-01-29 11:03:34.483508093 +0000 UTC m=+280.356542294" Jan 29 11:03:35 crc kubenswrapper[4593]: I0129 11:03:35.111464 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vbjtl" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" containerName="registry-server" probeResult="failure" output=< Jan 29 11:03:35 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:03:35 crc kubenswrapper[4593]: > Jan 29 11:03:36 crc kubenswrapper[4593]: I0129 11:03:36.536033 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:36 crc kubenswrapper[4593]: I0129 11:03:36.538657 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:36 crc kubenswrapper[4593]: I0129 11:03:36.580702 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:37 crc kubenswrapper[4593]: I0129 11:03:37.537369 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-57v5l" Jan 29 11:03:38 crc kubenswrapper[4593]: I0129 11:03:38.938995 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:38 crc kubenswrapper[4593]: I0129 11:03:38.939083 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:38 crc kubenswrapper[4593]: I0129 11:03:38.986020 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.367254 4593 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.368431 4593 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.368530 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369008 4593 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369173 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369189 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369198 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369206 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369214 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369220 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369227 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369233 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369241 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369247 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369257 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369262 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369270 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369275 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369367 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369378 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369386 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369395 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369403 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369410 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369419 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.369501 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.369507 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.400300 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.486583 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9" gracePeriod=15 Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.486652 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a" gracePeriod=15 Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.486662 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284" gracePeriod=15 Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.486711 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264" gracePeriod=15 Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.486741 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3" gracePeriod=15 Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542685 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542745 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542762 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542782 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542821 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542840 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542859 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.542931 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.583657 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v2f96" Jan 
29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.584405 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.584883 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.585188 4593 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.643807 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.643873 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.643904 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.643931 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644003 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644050 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644076 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644095 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644170 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644217 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644243 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644271 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644299 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.644849 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.645164 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.645216 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: I0129 11:03:39.695343 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 11:03:39 crc kubenswrapper[4593]: W0129 11:03:39.714224 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-31221c1349d3237fa02258feb4c5cf7aaa06324b121458a675e853b98f806479 WatchSource:0}: Error finding container 31221c1349d3237fa02258feb4c5cf7aaa06324b121458a675e853b98f806479: Status 404 returned error can't find the container with id 31221c1349d3237fa02258feb4c5cf7aaa06324b121458a675e853b98f806479 Jan 29 11:03:39 crc kubenswrapper[4593]: E0129 11:03:39.716743 4593 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f2ec912a95dbe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 11:03:39.716287934 +0000 UTC m=+285.589322135,LastTimestamp:2026-01-29 11:03:39.716287934 +0000 UTC m=+285.589322135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 11:03:40 crc kubenswrapper[4593]: I0129 11:03:40.493310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa"} Jan 29 11:03:40 crc kubenswrapper[4593]: I0129 11:03:40.493732 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"31221c1349d3237fa02258feb4c5cf7aaa06324b121458a675e853b98f806479"} Jan 29 11:03:40 crc kubenswrapper[4593]: I0129 11:03:40.494486 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:40 crc kubenswrapper[4593]: I0129 11:03:40.494977 4593 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:40 crc kubenswrapper[4593]: I0129 11:03:40.495517 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.007087 4593 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.007161 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.505175 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.507180 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.508043 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284" exitCode=0 Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.508092 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a" exitCode=0 Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.508104 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3" exitCode=0 Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.508115 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264" exitCode=2 Jan 29 11:03:41 crc kubenswrapper[4593]: I0129 11:03:41.509271 4593 scope.go:117] "RemoveContainer" containerID="68c0580ac8e5e3a5dc28e61c5e35215120f5f9807d8701ea6a72b7c26fd54709" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.479877 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.481662 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.482586 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.483135 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.483581 4593 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.516855 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.519004 4593 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9" exitCode=0 Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.519060 4593 scope.go:117] "RemoveContainer" containerID="c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.519224 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.536042 4593 scope.go:117] "RemoveContainer" containerID="5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540148 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540188 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540219 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540284 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540282 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.540344 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.541428 4593 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.541451 4593 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.541461 4593 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.557166 4593 scope.go:117] "RemoveContainer" containerID="0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.576650 4593 scope.go:117] "RemoveContainer" containerID="d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.594954 4593 scope.go:117] "RemoveContainer" containerID="5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.611849 4593 scope.go:117] "RemoveContainer" containerID="f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.704843 4593 scope.go:117] "RemoveContainer" containerID="c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.707703 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\": container with ID starting with c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284 not found: ID does not exist" containerID="c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.707906 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284"} err="failed to get container status \"c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\": rpc error: code = NotFound desc = could not find container \"c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284\": container with ID starting with c0cf22011101730e035a006d71048a573ab9514cbed5a8889df7fe86f0457284 not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.708016 4593 scope.go:117] "RemoveContainer" containerID="5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.709337 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\": container with ID starting with 5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a not found: ID does not exist" containerID="5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.709479 4593 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a"} err="failed to get container status \"5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\": rpc error: code = NotFound desc = could not find container \"5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a\": container with ID starting with 5a840467756964ce95bde18616244f0b479c55d1a9e8a2dc541997233175287a not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.709686 4593 scope.go:117] "RemoveContainer" containerID="0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.712286 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\": container with ID starting with 0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3 not found: ID does not exist" containerID="0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.712347 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3"} err="failed to get container status \"0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\": rpc error: code = NotFound desc = could not find container \"0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3\": container with ID starting with 0cd8d52c074e9d0c74d2f09cdeb271c5dcd06eef024ceac97aac91f3a2bae8f3 not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.712376 4593 scope.go:117] "RemoveContainer" containerID="d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.713594 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\": container with ID starting with d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264 not found: ID does not exist" containerID="d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.713645 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264"} err="failed to get container status \"d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\": rpc error: code = NotFound desc = could not find container \"d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264\": container with ID starting with d874aa85939f36b40d5e2df440655b2896f9e0da69caa429cdcfe00ddfa72264 not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.713668 4593 scope.go:117] "RemoveContainer" containerID="5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.715065 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\": container with ID starting with 5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9 not found: ID does not exist" 
containerID="5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.715088 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9"} err="failed to get container status \"5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\": rpc error: code = NotFound desc = could not find container \"5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9\": container with ID starting with 5ebdbe4d96c79bc9c5e8261d3e547ec9fb09092cbb8ba82b5ca3a305add3f8c9 not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.715101 4593 scope.go:117] "RemoveContainer" containerID="f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece" Jan 29 11:03:42 crc kubenswrapper[4593]: E0129 11:03:42.715521 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\": container with ID starting with f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece not found: ID does not exist" containerID="f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.715553 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece"} err="failed to get container status \"f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\": rpc error: code = NotFound desc = could not find container \"f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece\": container with ID starting with f207e2913e32bedb36802cde54219683287b76443f9fec2b81146aa8f54c3ece not found: ID does not exist" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.835184 4593 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.835823 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:42 crc kubenswrapper[4593]: I0129 11:03:42.836345 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:43 crc kubenswrapper[4593]: I0129 11:03:43.114172 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 11:03:43 crc kubenswrapper[4593]: I0129 11:03:43.971882 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:43 
crc kubenswrapper[4593]: I0129 11:03:43.973164 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:43 crc kubenswrapper[4593]: I0129 11:03:43.973516 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:43 crc kubenswrapper[4593]: I0129 11:03:43.973840 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:44 crc kubenswrapper[4593]: I0129 11:03:44.011958 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vbjtl" Jan 29 11:03:44 crc kubenswrapper[4593]: I0129 11:03:44.012599 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:44 crc kubenswrapper[4593]: I0129 11:03:44.013278 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:44 crc kubenswrapper[4593]: I0129 11:03:44.014046 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.077338 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.078096 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.078547 4593 status_manager.go:851] "Failed to get status for pod" 
podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.541871 4593 generic.go:334] "Generic (PLEG): container finished" podID="c78186dc-c8e4-4018-8e50-f7fc0e719890" containerID="1944570fd0d711d5a3ddcb6c09ae1efbc4f659af6ced43239c4b6ab7e0c86a58" exitCode=0 Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.541927 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c78186dc-c8e4-4018-8e50-f7fc0e719890","Type":"ContainerDied","Data":"1944570fd0d711d5a3ddcb6c09ae1efbc4f659af6ced43239c4b6ab7e0c86a58"} Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.542773 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.543319 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.543976 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:45 crc kubenswrapper[4593]: I0129 11:03:45.544419 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.136721 4593 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f2ec912a95dbe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 11:03:39.716287934 +0000 UTC m=+285.589322135,LastTimestamp:2026-01-29 11:03:39.716287934 +0000 UTC m=+285.589322135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.765854 4593 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.766095 4593 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.766335 4593 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.766558 4593 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.767085 4593 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.767108 4593 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.767306 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.865502 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.866588 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.866895 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.867175 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.867416 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:46 crc kubenswrapper[4593]: E0129 11:03:46.968603 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.999249 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access\") pod \"c78186dc-c8e4-4018-8e50-f7fc0e719890\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.999392 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock\") pod \"c78186dc-c8e4-4018-8e50-f7fc0e719890\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " Jan 29 11:03:46 crc kubenswrapper[4593]: I0129 11:03:46.999508 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir\") pod \"c78186dc-c8e4-4018-8e50-f7fc0e719890\" (UID: \"c78186dc-c8e4-4018-8e50-f7fc0e719890\") " Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:46.999972 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock" (OuterVolumeSpecName: "var-lock") pod "c78186dc-c8e4-4018-8e50-f7fc0e719890" (UID: "c78186dc-c8e4-4018-8e50-f7fc0e719890"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.000099 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c78186dc-c8e4-4018-8e50-f7fc0e719890" (UID: "c78186dc-c8e4-4018-8e50-f7fc0e719890"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.005917 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c78186dc-c8e4-4018-8e50-f7fc0e719890" (UID: "c78186dc-c8e4-4018-8e50-f7fc0e719890"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.100893 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c78186dc-c8e4-4018-8e50-f7fc0e719890-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.100945 4593 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.100963 4593 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c78186dc-c8e4-4018-8e50-f7fc0e719890-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:47 crc kubenswrapper[4593]: E0129 11:03:47.369887 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.558812 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c78186dc-c8e4-4018-8e50-f7fc0e719890","Type":"ContainerDied","Data":"3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b"} Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.559239 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e9832e7b98d23dae1b2fb65f8187f83a370fb734395c68300087fa85959095b" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.558878 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.563242 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.563883 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.564263 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.564682 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:47 crc kubenswrapper[4593]: I0129 11:03:47.656352 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" containerName="registry" containerID="cri-o://b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3" gracePeriod=30 Jan 29 11:03:48 crc kubenswrapper[4593]: E0129 11:03:48.170711 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.250695 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.251359 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.251654 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.251925 4593 status_manager.go:851] "Failed to get status for pod" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-g72zl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.252197 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.252479 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317441 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317513 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317744 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317830 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 
11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317897 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317938 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9stq9\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.317978 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.318014 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token\") pod \"066b2b93-4946-44cf-9757-05c8282cb7a3\" (UID: \"066b2b93-4946-44cf-9757-05c8282cb7a3\") " Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.318476 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.318653 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.322379 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.322627 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.323037 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.327216 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9" (OuterVolumeSpecName: "kube-api-access-9stq9") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "kube-api-access-9stq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.334050 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.341610 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "066b2b93-4946-44cf-9757-05c8282cb7a3" (UID: "066b2b93-4946-44cf-9757-05c8282cb7a3"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419833 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9stq9\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-kube-api-access-9stq9\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419863 4593 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419880 4593 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/066b2b93-4946-44cf-9757-05c8282cb7a3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419889 4593 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/066b2b93-4946-44cf-9757-05c8282cb7a3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419903 4593 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/066b2b93-4946-44cf-9757-05c8282cb7a3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419911 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.419920 4593 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/066b2b93-4946-44cf-9757-05c8282cb7a3-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.567107 4593 generic.go:334] "Generic (PLEG): container finished" podID="066b2b93-4946-44cf-9757-05c8282cb7a3" containerID="b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3" exitCode=0 Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.567148 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" event={"ID":"066b2b93-4946-44cf-9757-05c8282cb7a3","Type":"ContainerDied","Data":"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3"} Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.567176 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" event={"ID":"066b2b93-4946-44cf-9757-05c8282cb7a3","Type":"ContainerDied","Data":"fb99d447e5189720ac881b538d20b70d4e3aef55d12b3a424d01a9dc39152640"} Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.567191 4593 scope.go:117] "RemoveContainer" containerID="b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.567217 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.568403 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.568824 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.569205 4593 status_manager.go:851] "Failed to get status for pod" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-g72zl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.569787 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.570285 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.585091 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.585698 4593 status_manager.go:851] "Failed to get status for pod" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-g72zl\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.586364 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.586745 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.587035 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.589034 4593 scope.go:117] "RemoveContainer" containerID="b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3" Jan 29 11:03:48 crc kubenswrapper[4593]: E0129 11:03:48.589485 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3\": container with ID starting with b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3 not found: ID does not exist" containerID="b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3" Jan 29 11:03:48 crc kubenswrapper[4593]: I0129 11:03:48.589539 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3"} err="failed to get container status \"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3\": rpc error: code = NotFound desc = could not find container \"b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3\": container with ID starting with b0eae5ecd0f07f39d4a301805b28646763eb88458f87677425443839cbdb4cd3 not found: ID does not exist" Jan 29 11:03:49 crc kubenswrapper[4593]: E0129 11:03:49.771352 4593 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.074565 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.076418 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.076857 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.077584 4593 status_manager.go:851] "Failed to get status for pod" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-g72zl\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.079816 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.080436 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.092458 4593 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.092501 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a"
Jan 29 11:03:51 crc kubenswrapper[4593]: E0129 11:03:51.093120 4593 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.093880 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.593862 4593 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f9f02392ece426d45bf04eadcad66ef551bcb96420b397c2e95276ccec2b5800" exitCode=0
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.593956 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f9f02392ece426d45bf04eadcad66ef551bcb96420b397c2e95276ccec2b5800"}
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.594266 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7bc9277c29f0ea4f90bc30c23c8fafde6d0cd08135ba10b6c6165096d15d8a7a"}
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.594798 4593 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.594853 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.595412 4593 status_manager.go:851] "Failed to get status for pod" podUID="954251cb-5bea-456e-8d36-27eda2fe92d6" pod="openshift-marketplace/redhat-operators-vbjtl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vbjtl\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 29 11:03:51 crc kubenswrapper[4593]: E0129 11:03:51.595441 4593 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.596181 4593 status_manager.go:851] "Failed to get status for pod" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" pod="openshift-image-registry/image-registry-697d97f7c8-g72zl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-g72zl\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.596623 4593 status_manager.go:851] "Failed to get status for pod" podUID="69a313ce-b443-4080-9eea-bde0c61dc59d" pod="openshift-marketplace/redhat-marketplace-v2f96" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-v2f96\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.598484 4593 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 29 11:03:51 crc kubenswrapper[4593]: I0129 11:03:51.599056 4593 status_manager.go:851] "Failed to get status for pod" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.605108 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.605332 4593 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0" exitCode=1
Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.605374 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0"}
Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.605987 4593 scope.go:117] "RemoveContainer" containerID="3aa8027b70a73d515102594b7d440e61393d7ab855128c9814922082754cdef0"
Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.616445 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"26499e684473ff4ac9eb0dedbbff033965a500a9a4276cf5a92c08e9fe64f96b"}
Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.616489 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e6cf6f03687af6fbd5d29111ddbdaf274a7444ef5a36c54e812c6bc4d6bcf4b"}
Jan 29 11:03:52 crc kubenswrapper[4593]: I0129 11:03:52.616499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aa647d83dabe5a3d79a19930063128a6f909621f5d5c41375de40be266f096f9"}
Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.626540 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.626616 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb630f127dad9c772aa1b0d91c47433e7de976de011fabe9ef8cc269850f92de"}
Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.631471 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3cc85a8397a00dc41754449a054d8846ba6e9208d885de111d0af2960e7ea73b"}
Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.631514 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"53ba891dd20f4bbede831110f88317be0b1cb520878389c5750aedc2c2db2b51"}
Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.631806 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.631812 4593 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a"
Jan 29 11:03:53 crc kubenswrapper[4593]: I0129 11:03:53.631870 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a"
Jan 29 11:03:54 crc kubenswrapper[4593]: I0129 11:03:54.262849 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:03:54 crc kubenswrapper[4593]: I0129 11:03:54.668454 4593 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 29 11:03:56 crc kubenswrapper[4593]: I0129 11:03:56.094932 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:03:56 crc kubenswrapper[4593]: I0129 11:03:56.094985 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:03:56 crc kubenswrapper[4593]: I0129 11:03:56.100360 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:03:58 crc kubenswrapper[4593]: I0129 11:03:58.772849 4593 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:03:58 crc kubenswrapper[4593]: I0129 11:03:58.968255 4593 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a227a50d-4a52-4999-b737-d4a81267b353"
Jan 29 11:03:59 crc kubenswrapper[4593]: I0129 11:03:59.674363 4593 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a"
Jan 29 11:03:59 crc kubenswrapper[4593]: I0129 11:03:59.674396 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a"
Jan 29 11:03:59 crc kubenswrapper[4593]: I0129 11:03:59.678201 4593 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a227a50d-4a52-4999-b737-d4a81267b353"
Jan 29 11:03:59 crc kubenswrapper[4593]: I0129 11:03:59.678558 4593 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://aa647d83dabe5a3d79a19930063128a6f909621f5d5c41375de40be266f096f9"
Jan 29 11:03:59 crc kubenswrapper[4593]: I0129 11:03:59.678586 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:04:00 crc kubenswrapper[4593]: I0129 11:04:00.117787 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:04:00 crc kubenswrapper[4593]: I0129 11:04:00.122430 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:04:00 crc kubenswrapper[4593]: I0129 11:04:00.678461 4593 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a"
Jan 29 11:04:00 crc kubenswrapper[4593]: I0129 11:04:00.679130 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b28ebaa7-bd83-4239-8d22-71b82cdc8d0a"
Jan 29 11:04:00 crc kubenswrapper[4593]: I0129 11:04:00.684299 4593 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a227a50d-4a52-4999-b737-d4a81267b353"
Jan 29 11:04:04 crc kubenswrapper[4593]: I0129 11:04:04.254794 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 11:04:06 crc kubenswrapper[4593]: I0129 11:04:06.157202 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 11:04:07 crc kubenswrapper[4593]: I0129 11:04:07.850035 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 11:04:08 crc kubenswrapper[4593]: I0129 11:04:08.499171 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 11:04:08 crc kubenswrapper[4593]: I0129 11:04:08.522676 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 29 11:04:08 crc kubenswrapper[4593]: I0129 11:04:08.675454 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 11:04:08 crc kubenswrapper[4593]: I0129 11:04:08.856835 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 11:04:08 crc kubenswrapper[4593]: I0129 11:04:08.897774 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 11:04:09 crc kubenswrapper[4593]: I0129 11:04:09.115113 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 11:04:09 crc kubenswrapper[4593]: I0129 11:04:09.735360 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 11:04:09 crc kubenswrapper[4593]: I0129 11:04:09.769902 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 11:04:10 crc kubenswrapper[4593]: I0129 11:04:10.302318 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 29 11:04:10 crc kubenswrapper[4593]: I0129 11:04:10.818490 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.037520 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.039910 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.329921 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.339712 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.433420 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.488552 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.494104 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.738238 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 29 11:04:11 crc kubenswrapper[4593]: I0129 11:04:11.793103 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.013498 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.356879 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.390475 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.469974 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.475573 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.520900 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.599373 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 11:04:12 crc kubenswrapper[4593]: I0129 11:04:12.784511 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.159643 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.369401 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.486454 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.582573 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.587812 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.641998 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.647115 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.830901 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 11:04:13 crc kubenswrapper[4593]: I0129 11:04:13.913412 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.152753 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.165777 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.249185 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.260991 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.311548 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.376013 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.386461 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.484623 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.502797 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.623935 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.663683 4593 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.718990 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.741961 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.802675 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 11:04:14 crc kubenswrapper[4593]: I0129 11:04:14.867104 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.001857 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.052692 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.064608 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.071819 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.112869 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.131380 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.149669 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.181590 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.207303 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.241155 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.348657 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.375030 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.398095 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.404353 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.448838 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.471253 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.479280 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.536961 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.561201 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.579594 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.681546 4593 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.686391 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.686367028 podStartE2EDuration="36.686367028s" podCreationTimestamp="2026-01-29 11:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:03:58.917876055 +0000 UTC m=+304.790910246" watchObservedRunningTime="2026-01-29 11:04:15.686367028 +0000 UTC m=+321.559401209"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.689077 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-g72zl","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.689270 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.694692 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.707065 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.70704616 podStartE2EDuration="17.70704616s" podCreationTimestamp="2026-01-29 11:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:04:15.706243848 +0000 UTC m=+321.579278059" watchObservedRunningTime="2026-01-29 11:04:15.70704616 +0000 UTC m=+321.580080351"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.723962 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.778843 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.780936 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.888010 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 11:04:15 crc kubenswrapper[4593]: I0129 11:04:15.994994 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.003409 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.069554 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.119730 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.332113 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.422673 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.465396 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.477539 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.516060 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.525918 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.606215 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.680864 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.752253 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.762336 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.770932 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.829724 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.896992 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 11:04:16 crc kubenswrapper[4593]: I0129 11:04:16.931265 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.017759 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.040227 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.044682 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.081496 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" path="/var/lib/kubelet/pods/066b2b93-4946-44cf-9757-05c8282cb7a3/volumes"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.117851 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.183813 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.199548 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.278852 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.314468 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.343146 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.367905 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.410580 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.581095 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.624505 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.662564 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.669546 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.693568 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.847098 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 11:04:17 crc kubenswrapper[4593]: I0129 11:04:17.912878 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.003911 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.010196 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.031037 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.064219 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.266688 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.288028 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.292075 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.302106 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.398714 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.497765 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.545338 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.662066 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.696931 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.755921 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.761445 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.805724 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.886338 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.915245 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.927159 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 29 11:04:18 crc kubenswrapper[4593]: I0129 11:04:18.930922 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.023241 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.028184 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.159192 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.163008 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.177304 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.212103 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.251654 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.254501 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.315244 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.325430 4593 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.374738 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.390820 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.396674 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.429712 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.451063 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.533155 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.591200 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.651542 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.653365 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.710515 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.731814 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.769763 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.778667 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.816606 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.819000 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.867981 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.883164 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 11:04:19 crc kubenswrapper[4593]: I0129 11:04:19.931836 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.014760 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.046774 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.167573 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.185621 4593 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.199469 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.220565 4593 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.220857 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa" gracePeriod=5
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.262671 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.423271 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.438235 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.449498 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.470326 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.471174 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.487974 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.532136 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.543578 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.597173 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.614993 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.653942 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.691574 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.723744 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.724744 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.758896 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.865082 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 29 11:04:20 crc kubenswrapper[4593]: I0129 11:04:20.952214 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.008655 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.044764 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.086672 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.087560 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.137151 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.151616 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.223403 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.235925 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.326824 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.365220 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.410360 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.515794 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.547146 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.580245 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.721850 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.837497 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.872450 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 11:04:21 crc kubenswrapper[4593]: I0129 11:04:21.882378 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.129028 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.140301 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.140946 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.148697 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.201128 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.205126 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.255596 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.310062 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.317897 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.318077 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.548626 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.554895 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.590960 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.621213 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.774240 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.860402 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 11:04:22 crc kubenswrapper[4593]: I0129 11:04:22.897607 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.038296 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.046734 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.069715 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.102438 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.198724 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.478211 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.688597 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.715777 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.741509 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.904942 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 29 11:04:23 crc kubenswrapper[4593]: I0129 11:04:23.992328 4593 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.163081 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.167161 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.320051 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.348411 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.486831 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 11:04:24 crc kubenswrapper[4593]: I0129 11:04:24.696064 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.015242 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.099477 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.188195 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.616183 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.671308 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.814459 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.814553 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.836923 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.871566 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.871690 4593 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa" exitCode=137
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.871730 4593 scope.go:117] "RemoveContainer" containerID="1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.871785 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.885844 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.885886 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.885922 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.885976 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886013 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.885972 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886017 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886083 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886091 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886467 4593 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886502 4593 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886514 4593 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.886527 4593 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.894763 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.894898 4593 scope.go:117] "RemoveContainer" containerID="1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa"
Jan 29 11:04:25 crc kubenswrapper[4593]: E0129 11:04:25.895470 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa\": container with ID starting with 1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa not found: ID does not exist" containerID="1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.895508 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa"} err="failed to get container status \"1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa\": rpc error: code = NotFound desc = could not find container \"1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa\": container with ID starting with 1b0b3411fc4372f421b034e112b06a82b1bf3bfcd9f80166476dda55319b85fa not found: ID does not exist"
Jan 29 11:04:25 crc kubenswrapper[4593]: I0129 11:04:25.988222 4593 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.273116 4593 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.637676 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.663626 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.686003 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.756959 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.779115 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 11:04:26 crc kubenswrapper[4593]: I0129 11:04:26.929750 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.082017 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.083337 4593 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.094228 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.094302 4593
kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bfa76c00-a5b7-488b-b870-4e20971ef9ad" Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.099220 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.099255 4593 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="bfa76c00-a5b7-488b-b870-4e20971ef9ad" Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.174773 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 29 11:04:27 crc kubenswrapper[4593]: I0129 11:04:27.377976 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 11:05:03 crc kubenswrapper[4593]: I0129 11:05:03.946086 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:05:03 crc kubenswrapper[4593]: I0129 11:05:03.946608 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:05:33 crc kubenswrapper[4593]: I0129 11:05:33.946543 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:05:33 crc kubenswrapper[4593]: I0129 11:05:33.947204 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:06:03 crc kubenswrapper[4593]: I0129 11:06:03.946587 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:06:03 crc kubenswrapper[4593]: I0129 11:06:03.947239 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:06:03 crc kubenswrapper[4593]: I0129 11:06:03.947289 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:06:03 crc kubenswrapper[4593]: I0129 11:06:03.948279 4593 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:06:03 crc kubenswrapper[4593]: I0129 11:06:03.948337 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50" gracePeriod=600 Jan 29 11:06:04 crc kubenswrapper[4593]: I0129 11:06:04.518699 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50" exitCode=0 Jan 29 11:06:04 crc kubenswrapper[4593]: I0129 11:06:04.518748 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50"} Jan 29 11:06:04 crc kubenswrapper[4593]: I0129 11:06:04.519118 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d"} Jan 29 11:06:04 crc kubenswrapper[4593]: I0129 11:06:04.519150 4593 scope.go:117] "RemoveContainer" containerID="85b9020695a7c5fb81719f47033bd1154f0b959e1b2d108c3c1e296bbcc3a52a" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.531124 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t7s4r"] Jan 29 11:08:22 crc kubenswrapper[4593]: E0129 11:08:22.532895 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" containerName="registry" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533036 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" containerName="registry" Jan 29 11:08:22 crc kubenswrapper[4593]: E0129 11:08:22.533144 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533237 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 29 11:08:22 crc kubenswrapper[4593]: E0129 11:08:22.533329 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" containerName="installer" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533407 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" containerName="installer" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533596 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="066b2b93-4946-44cf-9757-05c8282cb7a3" containerName="registry" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533704 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.533776 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c78186dc-c8e4-4018-8e50-f7fc0e719890" containerName="installer" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.534250 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.535810 4593 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-g894x" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.536050 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.536174 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.540511 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7"] Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.541335 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.543076 4593 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zv4cm" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.548260 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-qhfhj"] Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.548907 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qhfhj" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.551086 4593 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-s8j76" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.554690 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t7s4r"] Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.558258 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qhfhj"] Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.565056 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7"] Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.578874 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlcsr\" (UniqueName: \"kubernetes.io/projected/79aa2cc5-a031-412d-a4c7-ba9251f84ed6-kube-api-access-qlcsr\") pod \"cert-manager-cainjector-cf98fcc89-lw7j7\" (UID: \"79aa2cc5-a031-412d-a4c7-ba9251f84ed6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.578999 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvnnl\" (UniqueName: \"kubernetes.io/projected/59d387c2-4d0b-4d6c-a0d8-2230657bebd0-kube-api-access-bvnnl\") pod \"cert-manager-858654f9db-qhfhj\" (UID: \"59d387c2-4d0b-4d6c-a0d8-2230657bebd0\") " pod="cert-manager/cert-manager-858654f9db-qhfhj" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.579039 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbqw\" (UniqueName: \"kubernetes.io/projected/e2b5756a-c46e-4e76-90bf-0a5c7c1dc759-kube-api-access-rbbqw\") pod \"cert-manager-webhook-687f57d79b-t7s4r\" (UID: \"e2b5756a-c46e-4e76-90bf-0a5c7c1dc759\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.679768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvnnl\" (UniqueName: \"kubernetes.io/projected/59d387c2-4d0b-4d6c-a0d8-2230657bebd0-kube-api-access-bvnnl\") pod \"cert-manager-858654f9db-qhfhj\" (UID: \"59d387c2-4d0b-4d6c-a0d8-2230657bebd0\") " pod="cert-manager/cert-manager-858654f9db-qhfhj" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.679817 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbbqw\" (UniqueName: \"kubernetes.io/projected/e2b5756a-c46e-4e76-90bf-0a5c7c1dc759-kube-api-access-rbbqw\") pod \"cert-manager-webhook-687f57d79b-t7s4r\" (UID: \"e2b5756a-c46e-4e76-90bf-0a5c7c1dc759\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.679867 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlcsr\" (UniqueName: \"kubernetes.io/projected/79aa2cc5-a031-412d-a4c7-ba9251f84ed6-kube-api-access-qlcsr\") pod \"cert-manager-cainjector-cf98fcc89-lw7j7\" (UID: \"79aa2cc5-a031-412d-a4c7-ba9251f84ed6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.699264 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbbqw\" (UniqueName: 
\"kubernetes.io/projected/e2b5756a-c46e-4e76-90bf-0a5c7c1dc759-kube-api-access-rbbqw\") pod \"cert-manager-webhook-687f57d79b-t7s4r\" (UID: \"e2b5756a-c46e-4e76-90bf-0a5c7c1dc759\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.700368 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvnnl\" (UniqueName: \"kubernetes.io/projected/59d387c2-4d0b-4d6c-a0d8-2230657bebd0-kube-api-access-bvnnl\") pod \"cert-manager-858654f9db-qhfhj\" (UID: \"59d387c2-4d0b-4d6c-a0d8-2230657bebd0\") " pod="cert-manager/cert-manager-858654f9db-qhfhj" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.701485 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlcsr\" (UniqueName: \"kubernetes.io/projected/79aa2cc5-a031-412d-a4c7-ba9251f84ed6-kube-api-access-qlcsr\") pod \"cert-manager-cainjector-cf98fcc89-lw7j7\" (UID: \"79aa2cc5-a031-412d-a4c7-ba9251f84ed6\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.857439 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.873786 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" Jan 29 11:08:22 crc kubenswrapper[4593]: I0129 11:08:22.910004 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-qhfhj" Jan 29 11:08:23 crc kubenswrapper[4593]: I0129 11:08:23.143164 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7"] Jan 29 11:08:23 crc kubenswrapper[4593]: I0129 11:08:23.157869 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:08:23 crc kubenswrapper[4593]: I0129 11:08:23.223520 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-qhfhj"] Jan 29 11:08:23 crc kubenswrapper[4593]: I0129 11:08:23.235087 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" event={"ID":"79aa2cc5-a031-412d-a4c7-ba9251f84ed6","Type":"ContainerStarted","Data":"1f5e72b8c35ebdaacdd09ea8ad8f6ceabc567826281d7b1c121b99d0d05a125d"} Jan 29 11:08:23 crc kubenswrapper[4593]: I0129 11:08:23.372625 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t7s4r"] Jan 29 11:08:23 crc kubenswrapper[4593]: W0129 11:08:23.375223 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2b5756a_c46e_4e76_90bf_0a5c7c1dc759.slice/crio-6a0775c711ee74827909fd2c77d03c0743ccd6d20f9b74aa3332bf7d4b167510 WatchSource:0}: Error finding container 6a0775c711ee74827909fd2c77d03c0743ccd6d20f9b74aa3332bf7d4b167510: Status 404 returned error can't find the container with id 6a0775c711ee74827909fd2c77d03c0743ccd6d20f9b74aa3332bf7d4b167510 Jan 29 11:08:24 crc kubenswrapper[4593]: I0129 11:08:24.242207 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" event={"ID":"e2b5756a-c46e-4e76-90bf-0a5c7c1dc759","Type":"ContainerStarted","Data":"6a0775c711ee74827909fd2c77d03c0743ccd6d20f9b74aa3332bf7d4b167510"} Jan 29 11:08:24 crc 
kubenswrapper[4593]: I0129 11:08:24.244497 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qhfhj" event={"ID":"59d387c2-4d0b-4d6c-a0d8-2230657bebd0","Type":"ContainerStarted","Data":"a7122287ba47f87676bebb1341fd9e131c0312f6a879f094c01013f66ecc40f3"} Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.274651 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" event={"ID":"79aa2cc5-a031-412d-a4c7-ba9251f84ed6","Type":"ContainerStarted","Data":"fd32d1d4a6d4706c4b7b8e0f3bc1d0422b7f1d9effaa3079f5a32565bc21c54c"} Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.276525 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" event={"ID":"e2b5756a-c46e-4e76-90bf-0a5c7c1dc759","Type":"ContainerStarted","Data":"7a6a7ee7ba6871741addb1938c5349767fcbe78536de29c611ba973ba8800f3b"} Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.276615 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.277906 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-qhfhj" event={"ID":"59d387c2-4d0b-4d6c-a0d8-2230657bebd0","Type":"ContainerStarted","Data":"31c0c240e391114a8b6f567a9d4aca5053c83f18bae943a421ee9339284d814c"} Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.290855 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lw7j7" podStartSLOduration=1.909984184 podStartE2EDuration="7.290832841s" podCreationTimestamp="2026-01-29 11:08:22 +0000 UTC" firstStartedPulling="2026-01-29 11:08:23.15751877 +0000 UTC m=+569.030552961" lastFinishedPulling="2026-01-29 11:08:28.538367417 +0000 UTC m=+574.411401618" observedRunningTime="2026-01-29 11:08:29.287747448 +0000 UTC m=+575.160781639" watchObservedRunningTime="2026-01-29 11:08:29.290832841 +0000 UTC m=+575.163867032" Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.309238 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-qhfhj" podStartSLOduration=1.989634353 podStartE2EDuration="7.309214047s" podCreationTimestamp="2026-01-29 11:08:22 +0000 UTC" firstStartedPulling="2026-01-29 11:08:23.237536869 +0000 UTC m=+569.110571060" lastFinishedPulling="2026-01-29 11:08:28.557116563 +0000 UTC m=+574.430150754" observedRunningTime="2026-01-29 11:08:29.308227311 +0000 UTC m=+575.181261502" watchObservedRunningTime="2026-01-29 11:08:29.309214047 +0000 UTC m=+575.182248238" Jan 29 11:08:29 crc kubenswrapper[4593]: I0129 11:08:29.338665 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" podStartSLOduration=2.071969894 podStartE2EDuration="7.338645741s" podCreationTimestamp="2026-01-29 11:08:22 +0000 UTC" firstStartedPulling="2026-01-29 11:08:23.378008489 +0000 UTC m=+569.251042680" lastFinishedPulling="2026-01-29 11:08:28.644684336 +0000 UTC m=+574.517718527" observedRunningTime="2026-01-29 11:08:29.33710283 +0000 UTC m=+575.210137021" watchObservedRunningTime="2026-01-29 11:08:29.338645741 +0000 UTC m=+575.211679952" Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.869802 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vmt7l"] Jan 29 11:08:31 crc 
kubenswrapper[4593]: I0129 11:08:31.870177 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-controller" containerID="cri-o://5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870278 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="nbdb" containerID="cri-o://0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870304 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870340 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="sbdb" containerID="cri-o://83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870380 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="northd" containerID="cri-o://e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870473 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-node" containerID="cri-o://4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.870456 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-acl-logging" containerID="cri-o://0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" gracePeriod=30 Jan 29 11:08:31 crc kubenswrapper[4593]: I0129 11:08:31.914119 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" containerID="cri-o://a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" gracePeriod=30 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.208387 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/3.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.211716 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovn-acl-logging/0.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.212146 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovn-controller/0.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.212566 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264274 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sm9pl"] Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264534 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264549 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264559 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="northd" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264567 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="northd" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264579 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-node" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264589 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-node" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264602 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="nbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264609 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="nbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264621 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264694 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264704 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264713 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264724 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="sbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264733 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="sbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264743 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264751 4593 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264764 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kubecfg-setup" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264772 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kubecfg-setup" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264780 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264789 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264803 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-acl-logging" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264811 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-acl-logging" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.264821 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264829 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264952 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="nbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264963 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-acl-logging" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264975 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264984 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="sbdb" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.264994 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265002 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265013 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="northd" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265025 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-node" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265046 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovn-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 
11:08:32.265057 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="kube-rbac-proxy-ovn-metrics" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.265165 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265175 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265278 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.265291 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerName="ovnkube-controller" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.267356 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.302102 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/2.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.303947 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/1.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.303990 4593 generic.go:334] "Generic (PLEG): container finished" podID="c76afd0b-36c6-4faa-9278-c08c60c483e9" containerID="7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2" exitCode=2 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.304050 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerDied","Data":"7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.304097 4593 scope.go:117] "RemoveContainer" containerID="ac51835cf1f007b8725bb86c71b27b6fbe4bdd808b94072ef83e847d22d1f117" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.304673 4593 scope.go:117] "RemoveContainer" containerID="7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.305192 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xpt4q_openshift-multus(c76afd0b-36c6-4faa-9278-c08c60c483e9)\"" pod="openshift-multus/multus-xpt4q" podUID="c76afd0b-36c6-4faa-9278-c08c60c483e9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.307118 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovnkube-controller/3.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.309623 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovn-acl-logging/0.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310125 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vmt7l_943b00a1-4aae-4054-b4fd-dc512fe58270/ovn-controller/0.log" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310505 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310529 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310538 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310545 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310555 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310562 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" exitCode=0 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310570 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" exitCode=143 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310577 4593 generic.go:334] "Generic (PLEG): container finished" podID="943b00a1-4aae-4054-b4fd-dc512fe58270" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" exitCode=143 Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310593 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310615 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310625 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310648 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310657 4593 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310667 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310677 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310686 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310692 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310697 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310702 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310707 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310712 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310717 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310722 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310727 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310734 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310741 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310748 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310753 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310758 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310763 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310768 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310772 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310778 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310784 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310789 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310795 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310802 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310808 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310815 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310820 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310825 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310831 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310863 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310873 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310878 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310883 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310891 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" event={"ID":"943b00a1-4aae-4054-b4fd-dc512fe58270","Type":"ContainerDied","Data":"1f4d4677f9da87318adb658a3d5c60bf8ae9dd156ada23706892dfb2a3940ad7"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310903 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310908 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310917 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310922 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310927 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310932 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310937 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310942 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310947 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.310951 4593 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.311023 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vmt7l" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.333232 4593 scope.go:117] "RemoveContainer" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337768 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfpld\" (UniqueName: \"kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337834 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337857 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337883 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337909 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337929 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337964 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.337988 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338018 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338047 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338080 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338101 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338145 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338202 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338293 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338320 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338348 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338370 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338398 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338424 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch\") pod \"943b00a1-4aae-4054-b4fd-dc512fe58270\" (UID: \"943b00a1-4aae-4054-b4fd-dc512fe58270\") " Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338577 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-etc-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338611 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-log-socket\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338654 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvp7r\" (UniqueName: \"kubernetes.io/projected/cc84611e-9a00-45a5-b761-0911d9b47bf7-kube-api-access-bvp7r\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338703 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-ovn\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338737 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338762 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-systemd-units\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338787 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-netd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338810 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-kubelet\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338833 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-bin\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338968 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.338978 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339000 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339007 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339034 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket" (OuterVolumeSpecName: "log-socket") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). 
InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339040 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339067 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log" (OuterVolumeSpecName: "node-log") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339284 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339431 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339758 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339785 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339809 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339828 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339848 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash" (OuterVolumeSpecName: "host-slash") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339847 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339866 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.339886 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340107 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340147 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-config\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340166 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-systemd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340185 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovn-node-metrics-cert\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340212 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-netns\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340227 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-node-log\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340312 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340410 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-var-lib-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340440 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-env-overrides\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340755 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-script-lib\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340801 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-slash\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340892 4593 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340908 4593 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-slash\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340922 4593 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340934 4593 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-node-log\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340947 4593 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340972 4593 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.340987 4593 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341002 4593 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341013 4593 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341024 4593 
reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341034 4593 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341045 4593 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341056 4593 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-log-socket\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341068 4593 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341082 4593 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341092 4593 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.341102 4593 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/943b00a1-4aae-4054-b4fd-dc512fe58270-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.344777 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.345994 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld" (OuterVolumeSpecName: "kube-api-access-jfpld") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "kube-api-access-jfpld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.350477 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.351404 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "943b00a1-4aae-4054-b4fd-dc512fe58270" (UID: "943b00a1-4aae-4054-b4fd-dc512fe58270"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.366304 4593 scope.go:117] "RemoveContainer" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.381408 4593 scope.go:117] "RemoveContainer" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.393509 4593 scope.go:117] "RemoveContainer" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.404871 4593 scope.go:117] "RemoveContainer" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.421680 4593 scope.go:117] "RemoveContainer" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.440508 4593 scope.go:117] "RemoveContainer" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441754 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-ovn\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441801 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441806 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-ovn\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441832 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-systemd-units\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441852 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-netd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441852 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-systemd-units\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441831 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441925 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-netd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.441965 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-kubelet\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442003 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-bin\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442049 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-kubelet\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442046 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442087 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-ovn-kubernetes\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442131 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-config\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442164 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-systemd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442187 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovn-node-metrics-cert\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442262 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-systemd\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442234 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-netns\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442308 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-node-log\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442335 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442359 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-node-log\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442364 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-env-overrides\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442382 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-run-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442385 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-var-lib-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442371 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-run-netns\") pod \"ovnkube-node-sm9pl\" (UID: 
\"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442407 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-script-lib\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442421 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-var-lib-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442428 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-slash\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442445 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-etc-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442466 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-log-socket\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442535 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-etc-openvswitch\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442582 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-slash\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442605 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-log-socket\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442625 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvp7r\" (UniqueName: \"kubernetes.io/projected/cc84611e-9a00-45a5-b761-0911d9b47bf7-kube-api-access-bvp7r\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc 
kubenswrapper[4593]: I0129 11:08:32.443055 4593 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/943b00a1-4aae-4054-b4fd-dc512fe58270-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.443069 4593 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/943b00a1-4aae-4054-b4fd-dc512fe58270-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.443078 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfpld\" (UniqueName: \"kubernetes.io/projected/943b00a1-4aae-4054-b4fd-dc512fe58270-kube-api-access-jfpld\") on node \"crc\" DevicePath \"\"" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.443011 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-env-overrides\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.443167 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-config\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.442148 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cc84611e-9a00-45a5-b761-0911d9b47bf7-host-cni-bin\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.443207 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovnkube-script-lib\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.446440 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cc84611e-9a00-45a5-b761-0911d9b47bf7-ovn-node-metrics-cert\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.458053 4593 scope.go:117] "RemoveContainer" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.458623 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvp7r\" (UniqueName: \"kubernetes.io/projected/cc84611e-9a00-45a5-b761-0911d9b47bf7-kube-api-access-bvp7r\") pod \"ovnkube-node-sm9pl\" (UID: \"cc84611e-9a00-45a5-b761-0911d9b47bf7\") " pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.483321 4593 scope.go:117] "RemoveContainer" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.509421 4593 scope.go:117] "RemoveContainer" 
containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.509948 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": container with ID starting with a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2 not found: ID does not exist" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510007 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} err="failed to get container status \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": rpc error: code = NotFound desc = could not find container \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": container with ID starting with a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510039 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.510327 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": container with ID starting with faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27 not found: ID does not exist" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510355 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} err="failed to get container status \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": rpc error: code = NotFound desc = could not find container \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": container with ID starting with faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510374 4593 scope.go:117] "RemoveContainer" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.510600 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": container with ID starting with 83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da not found: ID does not exist" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510625 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} err="failed to get container status \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": rpc error: code = NotFound desc = could not find container \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": container with ID starting with 
83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.510691 4593 scope.go:117] "RemoveContainer" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.511828 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": container with ID starting with 0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9 not found: ID does not exist" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.511867 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} err="failed to get container status \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": rpc error: code = NotFound desc = could not find container \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": container with ID starting with 0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.511894 4593 scope.go:117] "RemoveContainer" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.512436 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": container with ID starting with e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a not found: ID does not exist" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.512468 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} err="failed to get container status \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": rpc error: code = NotFound desc = could not find container \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": container with ID starting with e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.512486 4593 scope.go:117] "RemoveContainer" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.512972 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": container with ID starting with 469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8 not found: ID does not exist" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.512991 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} err="failed to get container status \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": rpc 
error: code = NotFound desc = could not find container \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": container with ID starting with 469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.513006 4593 scope.go:117] "RemoveContainer" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.513679 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": container with ID starting with 4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539 not found: ID does not exist" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.513736 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} err="failed to get container status \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": rpc error: code = NotFound desc = could not find container \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": container with ID starting with 4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.513766 4593 scope.go:117] "RemoveContainer" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.514094 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": container with ID starting with 0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990 not found: ID does not exist" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514120 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} err="failed to get container status \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": rpc error: code = NotFound desc = could not find container \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": container with ID starting with 0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514143 4593 scope.go:117] "RemoveContainer" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.514476 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": container with ID starting with 5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c not found: ID does not exist" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514503 4593 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} err="failed to get container status \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": rpc error: code = NotFound desc = could not find container \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": container with ID starting with 5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514520 4593 scope.go:117] "RemoveContainer" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: E0129 11:08:32.514863 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": container with ID starting with a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9 not found: ID does not exist" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514907 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} err="failed to get container status \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": rpc error: code = NotFound desc = could not find container \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": container with ID starting with a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.514929 4593 scope.go:117] "RemoveContainer" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515220 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} err="failed to get container status \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": rpc error: code = NotFound desc = could not find container \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": container with ID starting with a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515243 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515488 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} err="failed to get container status \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": rpc error: code = NotFound desc = could not find container \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": container with ID starting with faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515519 4593 scope.go:117] "RemoveContainer" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515866 4593 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} err="failed to get container status \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": rpc error: code = NotFound desc = could not find container \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": container with ID starting with 83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.515890 4593 scope.go:117] "RemoveContainer" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516098 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} err="failed to get container status \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": rpc error: code = NotFound desc = could not find container \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": container with ID starting with 0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516116 4593 scope.go:117] "RemoveContainer" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516317 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} err="failed to get container status \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": rpc error: code = NotFound desc = could not find container \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": container with ID starting with e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516339 4593 scope.go:117] "RemoveContainer" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516521 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} err="failed to get container status \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": rpc error: code = NotFound desc = could not find container \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": container with ID starting with 469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.516537 4593 scope.go:117] "RemoveContainer" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.517390 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} err="failed to get container status \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": rpc error: code = NotFound desc = could not find container \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": container with ID starting with 4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539 not found: ID does not exist" Jan 
29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.517423 4593 scope.go:117] "RemoveContainer" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.517715 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} err="failed to get container status \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": rpc error: code = NotFound desc = could not find container \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": container with ID starting with 0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.517763 4593 scope.go:117] "RemoveContainer" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518041 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} err="failed to get container status \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": rpc error: code = NotFound desc = could not find container \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": container with ID starting with 5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518068 4593 scope.go:117] "RemoveContainer" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518364 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} err="failed to get container status \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": rpc error: code = NotFound desc = could not find container \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": container with ID starting with a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518392 4593 scope.go:117] "RemoveContainer" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518734 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} err="failed to get container status \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": rpc error: code = NotFound desc = could not find container \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": container with ID starting with a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.518771 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519025 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} err="failed to get container status 
\"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": rpc error: code = NotFound desc = could not find container \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": container with ID starting with faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519050 4593 scope.go:117] "RemoveContainer" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519334 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} err="failed to get container status \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": rpc error: code = NotFound desc = could not find container \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": container with ID starting with 83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519370 4593 scope.go:117] "RemoveContainer" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519655 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} err="failed to get container status \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": rpc error: code = NotFound desc = could not find container \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": container with ID starting with 0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519684 4593 scope.go:117] "RemoveContainer" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519919 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} err="failed to get container status \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": rpc error: code = NotFound desc = could not find container \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": container with ID starting with e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.519939 4593 scope.go:117] "RemoveContainer" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520144 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} err="failed to get container status \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": rpc error: code = NotFound desc = could not find container \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": container with ID starting with 469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520183 4593 scope.go:117] "RemoveContainer" 
containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520438 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} err="failed to get container status \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": rpc error: code = NotFound desc = could not find container \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": container with ID starting with 4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520460 4593 scope.go:117] "RemoveContainer" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520758 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} err="failed to get container status \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": rpc error: code = NotFound desc = could not find container \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": container with ID starting with 0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.520776 4593 scope.go:117] "RemoveContainer" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521015 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} err="failed to get container status \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": rpc error: code = NotFound desc = could not find container \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": container with ID starting with 5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521042 4593 scope.go:117] "RemoveContainer" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521267 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} err="failed to get container status \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": rpc error: code = NotFound desc = could not find container \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": container with ID starting with a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521309 4593 scope.go:117] "RemoveContainer" containerID="a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521573 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2"} err="failed to get container status \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": rpc error: code = NotFound desc = could not find 
container \"a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2\": container with ID starting with a146d4a03edc1f262e4519a22c3fc74c4fa72b324bfbde6e446603c86653a0f2 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521600 4593 scope.go:117] "RemoveContainer" containerID="faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521858 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27"} err="failed to get container status \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": rpc error: code = NotFound desc = could not find container \"faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27\": container with ID starting with faf9e7cbfca14304cc95f63b9069ac7fcd83bb484fcc4de400c296b66c741a27 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.521901 4593 scope.go:117] "RemoveContainer" containerID="83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522137 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da"} err="failed to get container status \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": rpc error: code = NotFound desc = could not find container \"83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da\": container with ID starting with 83563e49b64a33e7511896564053cc0cff980413c7f0e1fd0498a5e7536332da not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522181 4593 scope.go:117] "RemoveContainer" containerID="0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522440 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9"} err="failed to get container status \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": rpc error: code = NotFound desc = could not find container \"0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9\": container with ID starting with 0d8af100097b68b2696a39fcc3b6a147e9b3764963e79f41a87947f6359125a9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522463 4593 scope.go:117] "RemoveContainer" containerID="e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522719 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a"} err="failed to get container status \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": rpc error: code = NotFound desc = could not find container \"e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a\": container with ID starting with e35afe3708f2c091d94403abeca90ac2d9b93ec56788c0f521c6c36c6718052a not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522740 4593 scope.go:117] "RemoveContainer" containerID="469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.522994 4593 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8"} err="failed to get container status \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": rpc error: code = NotFound desc = could not find container \"469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8\": container with ID starting with 469694f57bdf9a80b4fa2851da1911872a2a9566822383ccbce3ffc44fc6cdc8 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523024 4593 scope.go:117] "RemoveContainer" containerID="4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523331 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539"} err="failed to get container status \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": rpc error: code = NotFound desc = could not find container \"4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539\": container with ID starting with 4b08e98ac4b9aee754e72a3b4808e378cb69a14b23188a012fb7767a50b82539 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523355 4593 scope.go:117] "RemoveContainer" containerID="0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523660 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990"} err="failed to get container status \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": rpc error: code = NotFound desc = could not find container \"0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990\": container with ID starting with 0391d07d361bdcec8700edf6ea5826815266b50f303b5d2c62404e04f50af990 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523695 4593 scope.go:117] "RemoveContainer" containerID="5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523959 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c"} err="failed to get container status \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": rpc error: code = NotFound desc = could not find container \"5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c\": container with ID starting with 5ce8e781e41449b581222e4d05a2e3ca9f2efa8dfd78f191729500727e31d94c not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.523995 4593 scope.go:117] "RemoveContainer" containerID="a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.524261 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9"} err="failed to get container status \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": rpc error: code = NotFound desc = could not find container \"a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9\": container with ID starting with 
a9c855bd84a73655609f9c9071769d27bd282571b78506913066867774dcf5d9 not found: ID does not exist" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.581983 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.660090 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vmt7l"] Jan 29 11:08:32 crc kubenswrapper[4593]: I0129 11:08:32.682459 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vmt7l"] Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.085681 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="943b00a1-4aae-4054-b4fd-dc512fe58270" path="/var/lib/kubelet/pods/943b00a1-4aae-4054-b4fd-dc512fe58270/volumes" Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.318230 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/2.log" Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.320429 4593 generic.go:334] "Generic (PLEG): container finished" podID="cc84611e-9a00-45a5-b761-0911d9b47bf7" containerID="f5e3aad0c41912236686e6faf67844bb6d1c37fd275fa0c9fbe20bc6ecc870ac" exitCode=0 Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.320464 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerDied","Data":"f5e3aad0c41912236686e6faf67844bb6d1c37fd275fa0c9fbe20bc6ecc870ac"} Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.320491 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"e9d5e0c6cc806c8771b09bb971ba4bbc96484d6bad3775a48792cf313915f9b0"} Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.946529 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:08:33 crc kubenswrapper[4593]: I0129 11:08:33.946899 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328602 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"9da1c9cfa819caebcf5cfdb280d6a2bc6fe9be20c94cd1c294a21f55b262846f"} Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328929 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"f54f192beea231a88d26245e44e66f275a07e443f5bc6916b7349f0cbac7b999"} Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328939 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" 
event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"a5b6b60224d98739e9c06366973644f92aad41da241024e41b74bb0d575a6fc3"} Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328948 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"dc3edd3f9345d17646f4fe4918cecf6778a7963b909fb243d304e748fbf03451"} Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328956 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"51082c1388d07f2cb08f551e99213d987d08b24bc6e484e9810db2912ad174cd"} Jan 29 11:08:34 crc kubenswrapper[4593]: I0129 11:08:34.328964 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"6a01ae98ea5a6d2abb2c27f744261bd5225d22b0977678fe0a4b97d6db62b63a"} Jan 29 11:08:36 crc kubenswrapper[4593]: I0129 11:08:36.352096 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"1242e3161aaf4e2337474d78cb73c623d0a9f71c9c91b7f1425ff3c57ecebdaa"} Jan 29 11:08:37 crc kubenswrapper[4593]: I0129 11:08:37.860716 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-t7s4r" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.411991 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" event={"ID":"cc84611e-9a00-45a5-b761-0911d9b47bf7","Type":"ContainerStarted","Data":"ea2e3c096d1ef81526f242930773ddc338cdda6d2069f1da109ac54e38291144"} Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.413752 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.413810 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.413850 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.444030 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.452348 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:08:39 crc kubenswrapper[4593]: I0129 11:08:39.480790 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" podStartSLOduration=7.480774496 podStartE2EDuration="7.480774496s" podCreationTimestamp="2026-01-29 11:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:08:39.446522911 +0000 UTC m=+585.319557092" watchObservedRunningTime="2026-01-29 11:08:39.480774496 +0000 UTC m=+585.353808687" Jan 29 11:08:45 crc kubenswrapper[4593]: I0129 11:08:45.082874 4593 scope.go:117] "RemoveContainer" 
containerID="7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2" Jan 29 11:08:45 crc kubenswrapper[4593]: E0129 11:08:45.084354 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-xpt4q_openshift-multus(c76afd0b-36c6-4faa-9278-c08c60c483e9)\"" pod="openshift-multus/multus-xpt4q" podUID="c76afd0b-36c6-4faa-9278-c08c60c483e9" Jan 29 11:09:00 crc kubenswrapper[4593]: I0129 11:09:00.074721 4593 scope.go:117] "RemoveContainer" containerID="7088fbdf7ae2d9a3c27696c6ec34c0f98abb36e3618af2948ac923c1d6097be2" Jan 29 11:09:00 crc kubenswrapper[4593]: I0129 11:09:00.373661 4593 scope.go:117] "RemoveContainer" containerID="56d5157444e050b6f16a3cd3db852cdaa6435ef728d9605dbdd7a7adb3a64e51" Jan 29 11:09:00 crc kubenswrapper[4593]: I0129 11:09:00.560757 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xpt4q_c76afd0b-36c6-4faa-9278-c08c60c483e9/kube-multus/2.log" Jan 29 11:09:00 crc kubenswrapper[4593]: I0129 11:09:00.560825 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xpt4q" event={"ID":"c76afd0b-36c6-4faa-9278-c08c60c483e9","Type":"ContainerStarted","Data":"7ce67f1a579e52aa9e2e4d4f4f4d42ee734442d1f408d335f8fbb4182b8ca8ba"} Jan 29 11:09:02 crc kubenswrapper[4593]: I0129 11:09:02.660462 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sm9pl" Jan 29 11:09:03 crc kubenswrapper[4593]: I0129 11:09:03.946464 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:09:03 crc kubenswrapper[4593]: I0129 11:09:03.947152 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.399656 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w"] Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.401140 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.404074 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.425708 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w"] Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.512350 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.512666 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.512822 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.613930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.614014 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.614085 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.614870 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.615134 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.638975 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.719051 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:21 crc kubenswrapper[4593]: I0129 11:09:21.914572 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w"] Jan 29 11:09:22 crc kubenswrapper[4593]: I0129 11:09:22.679865 4593 generic.go:334] "Generic (PLEG): container finished" podID="b514f100-8029-41bf-9315-9e8c18a7238a" containerID="78c3759864d05d7d19be3b0d83ed871900e54c8183aab376b46a43c128e076f2" exitCode=0 Jan 29 11:09:22 crc kubenswrapper[4593]: I0129 11:09:22.681029 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" event={"ID":"b514f100-8029-41bf-9315-9e8c18a7238a","Type":"ContainerDied","Data":"78c3759864d05d7d19be3b0d83ed871900e54c8183aab376b46a43c128e076f2"} Jan 29 11:09:22 crc kubenswrapper[4593]: I0129 11:09:22.681138 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" event={"ID":"b514f100-8029-41bf-9315-9e8c18a7238a","Type":"ContainerStarted","Data":"359e2a1cd8d457cda64b56ce97afa8c8155194f23f4dad2b817bd5760fa136f3"} Jan 29 11:09:24 crc kubenswrapper[4593]: I0129 11:09:24.693145 4593 generic.go:334] "Generic (PLEG): container finished" podID="b514f100-8029-41bf-9315-9e8c18a7238a" containerID="f480d4bff3158dd2da88ac217ce006fa0885868606782266869d93440be1913a" exitCode=0 Jan 29 11:09:24 crc kubenswrapper[4593]: I0129 11:09:24.693195 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" event={"ID":"b514f100-8029-41bf-9315-9e8c18a7238a","Type":"ContainerDied","Data":"f480d4bff3158dd2da88ac217ce006fa0885868606782266869d93440be1913a"} Jan 29 11:09:25 crc kubenswrapper[4593]: I0129 11:09:25.701526 4593 generic.go:334] "Generic (PLEG): container finished" podID="b514f100-8029-41bf-9315-9e8c18a7238a" containerID="849838256ca3a590bbf121bdb5fd48f8450f87eb5499fb4dcc356b159271a2c8" exitCode=0 Jan 29 11:09:25 crc kubenswrapper[4593]: I0129 
11:09:25.701623 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" event={"ID":"b514f100-8029-41bf-9315-9e8c18a7238a","Type":"ContainerDied","Data":"849838256ca3a590bbf121bdb5fd48f8450f87eb5499fb4dcc356b159271a2c8"} Jan 29 11:09:26 crc kubenswrapper[4593]: I0129 11:09:26.930115 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.083199 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util\") pod \"b514f100-8029-41bf-9315-9e8c18a7238a\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.083661 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m\") pod \"b514f100-8029-41bf-9315-9e8c18a7238a\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.083706 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle\") pod \"b514f100-8029-41bf-9315-9e8c18a7238a\" (UID: \"b514f100-8029-41bf-9315-9e8c18a7238a\") " Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.084333 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle" (OuterVolumeSpecName: "bundle") pod "b514f100-8029-41bf-9315-9e8c18a7238a" (UID: "b514f100-8029-41bf-9315-9e8c18a7238a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.090830 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m" (OuterVolumeSpecName: "kube-api-access-dks2m") pod "b514f100-8029-41bf-9315-9e8c18a7238a" (UID: "b514f100-8029-41bf-9315-9e8c18a7238a"). InnerVolumeSpecName "kube-api-access-dks2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.114078 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util" (OuterVolumeSpecName: "util") pod "b514f100-8029-41bf-9315-9e8c18a7238a" (UID: "b514f100-8029-41bf-9315-9e8c18a7238a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.185219 4593 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.185290 4593 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b514f100-8029-41bf-9315-9e8c18a7238a-util\") on node \"crc\" DevicePath \"\"" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.185316 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dks2m\" (UniqueName: \"kubernetes.io/projected/b514f100-8029-41bf-9315-9e8c18a7238a-kube-api-access-dks2m\") on node \"crc\" DevicePath \"\"" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.723143 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" event={"ID":"b514f100-8029-41bf-9315-9e8c18a7238a","Type":"ContainerDied","Data":"359e2a1cd8d457cda64b56ce97afa8c8155194f23f4dad2b817bd5760fa136f3"} Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.723191 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="359e2a1cd8d457cda64b56ce97afa8c8155194f23f4dad2b817bd5760fa136f3" Jan 29 11:09:27 crc kubenswrapper[4593]: I0129 11:09:27.723262 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.081934 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xmhmc"] Jan 29 11:09:29 crc kubenswrapper[4593]: E0129 11:09:29.082146 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="extract" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.082161 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="extract" Jan 29 11:09:29 crc kubenswrapper[4593]: E0129 11:09:29.082207 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="pull" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.082216 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="pull" Jan 29 11:09:29 crc kubenswrapper[4593]: E0129 11:09:29.082233 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="util" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.082240 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="util" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.082361 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b514f100-8029-41bf-9315-9e8c18a7238a" containerName="extract" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.082872 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.084798 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-q8kdv" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.085176 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.086142 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.100780 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xmhmc"] Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.218872 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhb6\" (UniqueName: \"kubernetes.io/projected/b2e0c4ff-8a2b-474d-8414-a0026d61b11e-kube-api-access-gnhb6\") pod \"nmstate-operator-646758c888-xmhmc\" (UID: \"b2e0c4ff-8a2b-474d-8414-a0026d61b11e\") " pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.320417 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhb6\" (UniqueName: \"kubernetes.io/projected/b2e0c4ff-8a2b-474d-8414-a0026d61b11e-kube-api-access-gnhb6\") pod \"nmstate-operator-646758c888-xmhmc\" (UID: \"b2e0c4ff-8a2b-474d-8414-a0026d61b11e\") " pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.340580 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhb6\" (UniqueName: \"kubernetes.io/projected/b2e0c4ff-8a2b-474d-8414-a0026d61b11e-kube-api-access-gnhb6\") pod \"nmstate-operator-646758c888-xmhmc\" (UID: \"b2e0c4ff-8a2b-474d-8414-a0026d61b11e\") " pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.415777 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" Jan 29 11:09:29 crc kubenswrapper[4593]: I0129 11:09:29.805431 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-xmhmc"] Jan 29 11:09:30 crc kubenswrapper[4593]: I0129 11:09:30.749848 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" event={"ID":"b2e0c4ff-8a2b-474d-8414-a0026d61b11e","Type":"ContainerStarted","Data":"8a90ec6bf0ce834b124e82cbdf4240d6d6ecbbea28bf5beecbf453e216277260"} Jan 29 11:09:32 crc kubenswrapper[4593]: I0129 11:09:32.762319 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" event={"ID":"b2e0c4ff-8a2b-474d-8414-a0026d61b11e","Type":"ContainerStarted","Data":"82b6af78fede5e003fb41379fe5c96489cc9d4eb683404d4585a103f844a7dbf"} Jan 29 11:09:32 crc kubenswrapper[4593]: I0129 11:09:32.784521 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-xmhmc" podStartSLOduration=1.765230715 podStartE2EDuration="3.784505064s" podCreationTimestamp="2026-01-29 11:09:29 +0000 UTC" firstStartedPulling="2026-01-29 11:09:29.822443203 +0000 UTC m=+635.695477404" lastFinishedPulling="2026-01-29 11:09:31.841717562 +0000 UTC m=+637.714751753" observedRunningTime="2026-01-29 11:09:32.780367843 +0000 UTC m=+638.653402034" watchObservedRunningTime="2026-01-29 11:09:32.784505064 +0000 UTC m=+638.657539255" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.760362 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-q2995"] Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.761999 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.764810 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mffj6" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.781658 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-q2995"] Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.811549 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46"] Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.824711 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.837425 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.892244 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46"] Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.909705 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q2lbc"] Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.909910 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lnxw\" (UniqueName: \"kubernetes.io/projected/72d4f068-dc20-44d0-aca6-c8f0992536e6-kube-api-access-2lnxw\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.909960 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.910005 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n25d\" (UniqueName: \"kubernetes.io/projected/7a32568f-244c-432b-8186-683e8bc10371-kube-api-access-4n25d\") pod \"nmstate-metrics-54757c584b-q2995\" (UID: \"7a32568f-244c-432b-8186-683e8bc10371\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.910534 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.946460 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.946516 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.946598 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.952781 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:09:33 crc kubenswrapper[4593]: I0129 11:09:33.952872 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d" gracePeriod=600 Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011217 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-nmstate-lock\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011287 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-dbus-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011312 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-ovs-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011333 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbzkh\" (UniqueName: \"kubernetes.io/projected/ea391d24-e32c-440b-b5c2-218920192125-kube-api-access-dbzkh\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011367 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4n25d\" (UniqueName: \"kubernetes.io/projected/7a32568f-244c-432b-8186-683e8bc10371-kube-api-access-4n25d\") pod \"nmstate-metrics-54757c584b-q2995\" (UID: \"7a32568f-244c-432b-8186-683e8bc10371\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011427 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lnxw\" (UniqueName: \"kubernetes.io/projected/72d4f068-dc20-44d0-aca6-c8f0992536e6-kube-api-access-2lnxw\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.011461 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: E0129 11:09:34.011567 4593 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 29 11:09:34 crc kubenswrapper[4593]: E0129 11:09:34.011623 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair podName:72d4f068-dc20-44d0-aca6-c8f0992536e6 nodeName:}" failed. No retries permitted until 2026-01-29 11:09:34.511602436 +0000 UTC m=+640.384636657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-47n46" (UID: "72d4f068-dc20-44d0-aca6-c8f0992536e6") : secret "openshift-nmstate-webhook" not found Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.049564 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lnxw\" (UniqueName: \"kubernetes.io/projected/72d4f068-dc20-44d0-aca6-c8f0992536e6-kube-api-access-2lnxw\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.050389 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n25d\" (UniqueName: \"kubernetes.io/projected/7a32568f-244c-432b-8186-683e8bc10371-kube-api-access-4n25d\") pod \"nmstate-metrics-54757c584b-q2995\" (UID: \"7a32568f-244c-432b-8186-683e8bc10371\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.083042 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.087227 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62"] Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.087974 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.091871 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-cfmdq" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.091945 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.091871 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.115001 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62"] Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116367 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-ovs-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116416 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbzkh\" (UniqueName: \"kubernetes.io/projected/ea391d24-e32c-440b-b5c2-218920192125-kube-api-access-dbzkh\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116468 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-ovs-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116530 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116562 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116677 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-nmstate-lock\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116754 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvl9n\" (UniqueName: \"kubernetes.io/projected/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-kube-api-access-zvl9n\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: 
\"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.116778 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-dbus-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.118551 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-dbus-socket\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.118812 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ea391d24-e32c-440b-b5c2-218920192125-nmstate-lock\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.155846 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbzkh\" (UniqueName: \"kubernetes.io/projected/ea391d24-e32c-440b-b5c2-218920192125-kube-api-access-dbzkh\") pod \"nmstate-handler-q2lbc\" (UID: \"ea391d24-e32c-440b-b5c2-218920192125\") " pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.218009 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.218131 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.218208 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvl9n\" (UniqueName: \"kubernetes.io/projected/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-kube-api-access-zvl9n\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: E0129 11:09:34.218795 4593 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 29 11:09:34 crc kubenswrapper[4593]: E0129 11:09:34.218860 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert podName:2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2 nodeName:}" failed. No retries permitted until 2026-01-29 11:09:34.718844956 +0000 UTC m=+640.591879147 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-nck62" (UID: "2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2") : secret "plugin-serving-cert" not found Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.219569 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.233048 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.241075 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvl9n\" (UniqueName: \"kubernetes.io/projected/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-kube-api-access-zvl9n\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.343548 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-fdf6c7869-trqgk"] Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.344531 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.380656 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fdf6c7869-trqgk"] Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428705 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-trusted-ca-bundle\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428742 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-oauth-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428765 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428782 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-oauth-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428801 4593 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428848 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8q2r\" (UniqueName: \"kubernetes.io/projected/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-kube-api-access-t8q2r\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.428882 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-service-ca\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529597 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8q2r\" (UniqueName: \"kubernetes.io/projected/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-kube-api-access-t8q2r\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529682 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-service-ca\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529719 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529733 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-trusted-ca-bundle\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529750 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-oauth-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529769 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc 
kubenswrapper[4593]: I0129 11:09:34.529783 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-oauth-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.529824 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.531430 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-oauth-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.531484 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-service-ca\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.531892 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.532367 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-trusted-ca-bundle\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.535503 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/72d4f068-dc20-44d0-aca6-c8f0992536e6-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-47n46\" (UID: \"72d4f068-dc20-44d0-aca6-c8f0992536e6\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.537329 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-serving-cert\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.540331 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-console-oauth-config\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.551053 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t8q2r\" (UniqueName: \"kubernetes.io/projected/c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce-kube-api-access-t8q2r\") pod \"console-fdf6c7869-trqgk\" (UID: \"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce\") " pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.689233 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-q2995"] Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.689471 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.732213 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.740030 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-nck62\" (UID: \"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.769521 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.804540 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" event={"ID":"7a32568f-244c-432b-8186-683e8bc10371","Type":"ContainerStarted","Data":"2738cebdbe181dd7e7a77d4d417aa44ce887ceeebde33b3991e01e517f9d3c58"} Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.807351 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q2lbc" event={"ID":"ea391d24-e32c-440b-b5c2-218920192125","Type":"ContainerStarted","Data":"638e9f8ebc583f0f80f1aee775823876d32225024c79ce43ade20b63e5339ee5"} Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.808554 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.827564 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d" exitCode=0 Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.827610 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d"} Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.827658 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2"} Jan 29 11:09:34 crc kubenswrapper[4593]: I0129 11:09:34.827701 4593 scope.go:117] "RemoveContainer" containerID="8b86c4fe063da798a93b66c4ff5d4efee81766c3e10d5ae883a58f37ce9f5d50" Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.030787 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46"] Jan 29 11:09:35 crc kubenswrapper[4593]: W0129 11:09:35.047484 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d4f068_dc20_44d0_aca6_c8f0992536e6.slice/crio-890e420c17612155f6d31b57931b665f61bf8fc947fe40113094f1dc6e5745e9 WatchSource:0}: Error finding container 890e420c17612155f6d31b57931b665f61bf8fc947fe40113094f1dc6e5745e9: Status 404 returned error can't find the container with id 890e420c17612155f6d31b57931b665f61bf8fc947fe40113094f1dc6e5745e9 Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.136975 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fdf6c7869-trqgk"] Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.282464 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62"] Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.840388 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" event={"ID":"72d4f068-dc20-44d0-aca6-c8f0992536e6","Type":"ContainerStarted","Data":"890e420c17612155f6d31b57931b665f61bf8fc947fe40113094f1dc6e5745e9"} Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.843215 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fdf6c7869-trqgk" event={"ID":"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce","Type":"ContainerStarted","Data":"b98249628a8681273dfbe20c075f500ca935590bea8450af0bb76b2ae943a69b"} Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.843250 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fdf6c7869-trqgk" event={"ID":"c9ef55ac-f08b-4f72-a96c-ca6ddb3786ce","Type":"ContainerStarted","Data":"a366ca9ac1937b5b282f224c4d5e7b88852693512ece90a53076c9e3d367d71b"} Jan 29 11:09:35 crc kubenswrapper[4593]: I0129 11:09:35.844976 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" 
event={"ID":"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2","Type":"ContainerStarted","Data":"a07a0f2f3cf331172fb02c16c3b93e4ec6354f121700102be5ce3afc89a5c670"} Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.863170 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" event={"ID":"7a32568f-244c-432b-8186-683e8bc10371","Type":"ContainerStarted","Data":"ef1c9f7f74d586c20da595eba1cc80f73454d87184fdde928e71e187a675253a"} Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.865896 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q2lbc" event={"ID":"ea391d24-e32c-440b-b5c2-218920192125","Type":"ContainerStarted","Data":"d479d04d33245f40c4d8407da6fee37ccccbf786201e9a41f1574e43ce762d71"} Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.866076 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.869145 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" event={"ID":"72d4f068-dc20-44d0-aca6-c8f0992536e6","Type":"ContainerStarted","Data":"f2da04d4ea05914c5736faf7c64b996c8715bc2e3f0ae3f19a2b3b24fe89b9b6"} Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.870083 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.874884 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" event={"ID":"2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2","Type":"ContainerStarted","Data":"5778f62d4ff3a173a41a681e0dcab626cd20931cea220413f9fe2b0952b54566"} Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.888626 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-fdf6c7869-trqgk" podStartSLOduration=4.888606329 podStartE2EDuration="4.888606329s" podCreationTimestamp="2026-01-29 11:09:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:09:35.866107672 +0000 UTC m=+641.739141883" watchObservedRunningTime="2026-01-29 11:09:38.888606329 +0000 UTC m=+644.761640530" Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.889339 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q2lbc" podStartSLOduration=2.108636813 podStartE2EDuration="5.889333908s" podCreationTimestamp="2026-01-29 11:09:33 +0000 UTC" firstStartedPulling="2026-01-29 11:09:34.259140734 +0000 UTC m=+640.132174925" lastFinishedPulling="2026-01-29 11:09:38.039837829 +0000 UTC m=+643.912872020" observedRunningTime="2026-01-29 11:09:38.888848696 +0000 UTC m=+644.761882897" watchObservedRunningTime="2026-01-29 11:09:38.889333908 +0000 UTC m=+644.762368109" Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.917409 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" podStartSLOduration=2.928762567 podStartE2EDuration="5.917384908s" podCreationTimestamp="2026-01-29 11:09:33 +0000 UTC" firstStartedPulling="2026-01-29 11:09:35.069157228 +0000 UTC m=+640.942191419" lastFinishedPulling="2026-01-29 11:09:38.057779569 +0000 UTC m=+643.930813760" observedRunningTime="2026-01-29 11:09:38.907197606 +0000 UTC 
m=+644.780231807" watchObservedRunningTime="2026-01-29 11:09:38.917384908 +0000 UTC m=+644.790419109" Jan 29 11:09:38 crc kubenswrapper[4593]: I0129 11:09:38.936770 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-nck62" podStartSLOduration=2.191935602 podStartE2EDuration="4.936747986s" podCreationTimestamp="2026-01-29 11:09:34 +0000 UTC" firstStartedPulling="2026-01-29 11:09:35.294212894 +0000 UTC m=+641.167247085" lastFinishedPulling="2026-01-29 11:09:38.039025278 +0000 UTC m=+643.912059469" observedRunningTime="2026-01-29 11:09:38.932260316 +0000 UTC m=+644.805294507" watchObservedRunningTime="2026-01-29 11:09:38.936747986 +0000 UTC m=+644.809782177" Jan 29 11:09:40 crc kubenswrapper[4593]: I0129 11:09:40.909271 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" event={"ID":"7a32568f-244c-432b-8186-683e8bc10371","Type":"ContainerStarted","Data":"0c4c940f37c68347cf0f5c8998f22fb55b3baf40d61156dc7955df52023fff26"} Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.256522 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q2lbc" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.277423 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-q2995" podStartSLOduration=5.290025509 podStartE2EDuration="11.277403643s" podCreationTimestamp="2026-01-29 11:09:33 +0000 UTC" firstStartedPulling="2026-01-29 11:09:34.713368717 +0000 UTC m=+640.586402908" lastFinishedPulling="2026-01-29 11:09:40.700746841 +0000 UTC m=+646.573781042" observedRunningTime="2026-01-29 11:09:40.933044351 +0000 UTC m=+646.806078552" watchObservedRunningTime="2026-01-29 11:09:44.277403643 +0000 UTC m=+650.150437844" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.689620 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.689737 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.694750 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.938076 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-fdf6c7869-trqgk" Jan 29 11:09:44 crc kubenswrapper[4593]: I0129 11:09:44.993653 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:09:54 crc kubenswrapper[4593]: I0129 11:09:54.781183 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.654927 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz"] Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.656944 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.663324 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.664125 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz"] Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.682403 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.682503 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.682539 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4kmc\" (UniqueName: \"kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.783616 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.783673 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4kmc\" (UniqueName: \"kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.783723 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.784143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.784157 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.810251 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4kmc\" (UniqueName: \"kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:06 crc kubenswrapper[4593]: I0129 11:10:06.975565 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:07 crc kubenswrapper[4593]: I0129 11:10:07.276354 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz"] Jan 29 11:10:08 crc kubenswrapper[4593]: I0129 11:10:08.094685 4593 generic.go:334] "Generic (PLEG): container finished" podID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerID="2f9e8302f58d43495da3546dd373f31c2ec8f1080059c2177b2216fe37d06827" exitCode=0 Jan 29 11:10:08 crc kubenswrapper[4593]: I0129 11:10:08.095048 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" event={"ID":"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11","Type":"ContainerDied","Data":"2f9e8302f58d43495da3546dd373f31c2ec8f1080059c2177b2216fe37d06827"} Jan 29 11:10:08 crc kubenswrapper[4593]: I0129 11:10:08.095091 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" event={"ID":"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11","Type":"ContainerStarted","Data":"073890ae1de6de6485004546b26f86a67ff11f6fb88351c22cfe65b1c90a225d"} Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.052822 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-8425v" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" containerID="cri-o://479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f" gracePeriod=15 Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.111718 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" event={"ID":"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11","Type":"ContainerDied","Data":"dcdc4a58e23cff241a1ebc2410e2e100599d977a3ac38f3d95dd13179d23922f"} Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.111466 4593 generic.go:334] "Generic (PLEG): container finished" podID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" 
containerID="dcdc4a58e23cff241a1ebc2410e2e100599d977a3ac38f3d95dd13179d23922f" exitCode=0 Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.512699 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8425v_ccb12507-4eef-467d-885d-982c68807bda/console/0.log" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.512955 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.635905 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.635956 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.635984 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636039 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636095 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636132 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636157 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57zkh\" (UniqueName: \"kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh\") pod \"ccb12507-4eef-467d-885d-982c68807bda\" (UID: \"ccb12507-4eef-467d-885d-982c68807bda\") " Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636829 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca" (OuterVolumeSpecName: "service-ca") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636841 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636851 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.636985 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config" (OuterVolumeSpecName: "console-config") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.640923 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.640961 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh" (OuterVolumeSpecName: "kube-api-access-57zkh") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "kube-api-access-57zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.648275 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ccb12507-4eef-467d-885d-982c68807bda" (UID: "ccb12507-4eef-467d-885d-982c68807bda"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737793 4593 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737841 4593 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737854 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57zkh\" (UniqueName: \"kubernetes.io/projected/ccb12507-4eef-467d-885d-982c68807bda-kube-api-access-57zkh\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737868 4593 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737879 4593 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737889 4593 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ccb12507-4eef-467d-885d-982c68807bda-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:10 crc kubenswrapper[4593]: I0129 11:10:10.737899 4593 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ccb12507-4eef-467d-885d-982c68807bda-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.124548 4593 generic.go:334] "Generic (PLEG): container finished" podID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerID="f1e0660cfa2f6090117b5c5883f25509dd5a8fa838ee86718510846b105608ae" exitCode=0 Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.124653 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" event={"ID":"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11","Type":"ContainerDied","Data":"f1e0660cfa2f6090117b5c5883f25509dd5a8fa838ee86718510846b105608ae"} Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128228 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-8425v_ccb12507-4eef-467d-885d-982c68807bda/console/0.log" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128281 4593 generic.go:334] "Generic (PLEG): container finished" podID="ccb12507-4eef-467d-885d-982c68807bda" containerID="479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f" exitCode=2 Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128309 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8425v" event={"ID":"ccb12507-4eef-467d-885d-982c68807bda","Type":"ContainerDied","Data":"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f"} Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128336 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-8425v" 
event={"ID":"ccb12507-4eef-467d-885d-982c68807bda","Type":"ContainerDied","Data":"b2d3338b1514b5c7e9256324d64b1f803fa4ccbc8cc1a14cc26386a3d7708bb8"} Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128355 4593 scope.go:117] "RemoveContainer" containerID="479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.128388 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-8425v" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.150145 4593 scope.go:117] "RemoveContainer" containerID="479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f" Jan 29 11:10:11 crc kubenswrapper[4593]: E0129 11:10:11.151120 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f\": container with ID starting with 479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f not found: ID does not exist" containerID="479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.151167 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f"} err="failed to get container status \"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f\": rpc error: code = NotFound desc = could not find container \"479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f\": container with ID starting with 479ab71a20268cace33237c302625fff890b4d521372542cf861c6e0b4faad5f not found: ID does not exist" Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.159833 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:10:11 crc kubenswrapper[4593]: I0129 11:10:11.166045 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-8425v"] Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.348874 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.461592 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util\") pod \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.461773 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4kmc\" (UniqueName: \"kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc\") pod \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.461907 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle\") pod \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\" (UID: \"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11\") " Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.463367 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle" (OuterVolumeSpecName: "bundle") pod "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" (UID: "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.477680 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc" (OuterVolumeSpecName: "kube-api-access-p4kmc") pod "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" (UID: "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11"). InnerVolumeSpecName "kube-api-access-p4kmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.483375 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util" (OuterVolumeSpecName: "util") pod "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" (UID: "ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.563450 4593 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.563480 4593 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-util\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:12 crc kubenswrapper[4593]: I0129 11:10:12.563489 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4kmc\" (UniqueName: \"kubernetes.io/projected/ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11-kube-api-access-p4kmc\") on node \"crc\" DevicePath \"\"" Jan 29 11:10:13 crc kubenswrapper[4593]: I0129 11:10:13.083070 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb12507-4eef-467d-885d-982c68807bda" path="/var/lib/kubelet/pods/ccb12507-4eef-467d-885d-982c68807bda/volumes" Jan 29 11:10:13 crc kubenswrapper[4593]: I0129 11:10:13.143330 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" event={"ID":"ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11","Type":"ContainerDied","Data":"073890ae1de6de6485004546b26f86a67ff11f6fb88351c22cfe65b1c90a225d"} Jan 29 11:10:13 crc kubenswrapper[4593]: I0129 11:10:13.143373 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="073890ae1de6de6485004546b26f86a67ff11f6fb88351c22cfe65b1c90a225d" Jan 29 11:10:13 crc kubenswrapper[4593]: I0129 11:10:13.143383 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.502164 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk"] Jan 29 11:10:22 crc kubenswrapper[4593]: E0129 11:10:22.503512 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="extract" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.503528 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="extract" Jan 29 11:10:22 crc kubenswrapper[4593]: E0129 11:10:22.503548 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.503556 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" Jan 29 11:10:22 crc kubenswrapper[4593]: E0129 11:10:22.503585 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="pull" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.503593 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="pull" Jan 29 11:10:22 crc kubenswrapper[4593]: E0129 11:10:22.503607 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="util" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.503614 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="util" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.504379 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11" containerName="extract" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.504410 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb12507-4eef-467d-885d-982c68807bda" containerName="console" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.505472 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.508942 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.509342 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.510517 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.510782 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.510988 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gl72r" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.535951 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk"] Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.687343 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmh2k\" (UniqueName: \"kubernetes.io/projected/421156e9-d8d3-4112-bd58-d09c40a70a12-kube-api-access-vmh2k\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.687761 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-apiservice-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.687831 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-webhook-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.789153 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmh2k\" (UniqueName: \"kubernetes.io/projected/421156e9-d8d3-4112-bd58-d09c40a70a12-kube-api-access-vmh2k\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.789498 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-apiservice-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.789688 
4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-webhook-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.796846 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-webhook-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.808401 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/421156e9-d8d3-4112-bd58-d09c40a70a12-apiservice-cert\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.831029 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmh2k\" (UniqueName: \"kubernetes.io/projected/421156e9-d8d3-4112-bd58-d09c40a70a12-kube-api-access-vmh2k\") pod \"metallb-operator-controller-manager-5bf4d9f4bd-ll9bk\" (UID: \"421156e9-d8d3-4112-bd58-d09c40a70a12\") " pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.838851 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4"] Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.839832 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.843191 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.843736 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.844000 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5nljv" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.853946 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4"] Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.992330 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-apiservice-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.992382 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-webhook-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:22 crc kubenswrapper[4593]: I0129 11:10:22.992451 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlh76\" (UniqueName: \"kubernetes.io/projected/c3381187-83f6-4877-8d72-3ed30f360a86-kube-api-access-hlh76\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.093106 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlh76\" (UniqueName: \"kubernetes.io/projected/c3381187-83f6-4877-8d72-3ed30f360a86-kube-api-access-hlh76\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.093162 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-apiservice-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.093195 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-webhook-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 
11:10:23.096778 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-webhook-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.108252 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3381187-83f6-4877-8d72-3ed30f360a86-apiservice-cert\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.121134 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.154999 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlh76\" (UniqueName: \"kubernetes.io/projected/c3381187-83f6-4877-8d72-3ed30f360a86-kube-api-access-hlh76\") pod \"metallb-operator-webhook-server-7fdc78c47c-w2tv4\" (UID: \"c3381187-83f6-4877-8d72-3ed30f360a86\") " pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.186769 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.587109 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4"] Jan 29 11:10:23 crc kubenswrapper[4593]: I0129 11:10:23.708588 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk"] Jan 29 11:10:24 crc kubenswrapper[4593]: I0129 11:10:24.208102 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" event={"ID":"421156e9-d8d3-4112-bd58-d09c40a70a12","Type":"ContainerStarted","Data":"de8d47ca6715760c776d46fe1e47f8c9ba0ffa5f00135b86c26bccffbd4ebc85"} Jan 29 11:10:24 crc kubenswrapper[4593]: I0129 11:10:24.210116 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" event={"ID":"c3381187-83f6-4877-8d72-3ed30f360a86","Type":"ContainerStarted","Data":"561adee80387774a85d164bd590a76efa44ea14f07e093f3d278546b2b2f389b"} Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.253227 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" event={"ID":"c3381187-83f6-4877-8d72-3ed30f360a86","Type":"ContainerStarted","Data":"da847d1ec79e66e150dac98a643a705701e8adbd485dba899b5f5eb68d3b68f1"} Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.253783 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.254674 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" 
event={"ID":"421156e9-d8d3-4112-bd58-d09c40a70a12","Type":"ContainerStarted","Data":"6478b453cfe7642626d97fd9fc7023a2fd10c542d2e3f8ed40bffc629a6d68aa"} Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.254876 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.275426 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" podStartSLOduration=2.273844272 podStartE2EDuration="8.275409067s" podCreationTimestamp="2026-01-29 11:10:22 +0000 UTC" firstStartedPulling="2026-01-29 11:10:23.598141914 +0000 UTC m=+689.471176105" lastFinishedPulling="2026-01-29 11:10:29.599706709 +0000 UTC m=+695.472740900" observedRunningTime="2026-01-29 11:10:30.275397277 +0000 UTC m=+696.148431468" watchObservedRunningTime="2026-01-29 11:10:30.275409067 +0000 UTC m=+696.148443258" Jan 29 11:10:30 crc kubenswrapper[4593]: I0129 11:10:30.294618 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" podStartSLOduration=2.4254703060000002 podStartE2EDuration="8.294600331s" podCreationTimestamp="2026-01-29 11:10:22 +0000 UTC" firstStartedPulling="2026-01-29 11:10:23.713357312 +0000 UTC m=+689.586391503" lastFinishedPulling="2026-01-29 11:10:29.582487337 +0000 UTC m=+695.455521528" observedRunningTime="2026-01-29 11:10:30.294026756 +0000 UTC m=+696.167060967" watchObservedRunningTime="2026-01-29 11:10:30.294600331 +0000 UTC m=+696.167634522" Jan 29 11:10:43 crc kubenswrapper[4593]: I0129 11:10:43.192419 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7fdc78c47c-w2tv4" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.124533 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5bf4d9f4bd-ll9bk" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.822773 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h"] Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.823508 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.827210 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-54s6j"] Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.830110 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.834085 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.834289 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.834420 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-tqjk4" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.843507 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h"] Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.847891 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.907745 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-reloader\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.907797 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4d2v\" (UniqueName: \"kubernetes.io/projected/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-kube-api-access-m4d2v\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.907843 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbbpl\" (UniqueName: \"kubernetes.io/projected/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-kube-api-access-zbbpl\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.907970 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-conf\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.907999 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.908030 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.908051 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.908105 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-sockets\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.908170 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-startup\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.927945 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-m77zw"] Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.928846 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-m77zw" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.931455 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.932254 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.932426 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.932616 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-lhb8v" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.947718 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-hvqbg"] Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.948586 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.954432 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 11:11:03 crc kubenswrapper[4593]: I0129 11:11:03.980005 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hvqbg"] Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.008988 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-startup\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009060 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-reloader\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009096 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4d2v\" (UniqueName: \"kubernetes.io/projected/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-kube-api-access-m4d2v\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009118 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbbpl\" (UniqueName: \"kubernetes.io/projected/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-kube-api-access-zbbpl\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009160 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-conf\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009181 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009219 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009239 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.009260 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" 
(UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-sockets\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.011548 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.011793 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-conf\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.011853 4593 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.011891 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs podName:9eb36e6e-e554-4b1a-9750-cd81c4c8d985 nodeName:}" failed. No retries permitted until 2026-01-29 11:11:04.511876251 +0000 UTC m=+730.384910442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs") pod "frr-k8s-54s6j" (UID: "9eb36e6e-e554-4b1a-9750-cd81c4c8d985") : secret "frr-k8s-certs-secret" not found Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.012293 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-reloader\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.012497 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-startup\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.012877 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-frr-sockets\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.036227 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.039839 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbbpl\" (UniqueName: \"kubernetes.io/projected/45d808cf-80c4-4f7b-a172-76e4ecd9e37b-kube-api-access-zbbpl\") pod \"frr-k8s-webhook-server-7df86c4f6c-dj42h\" (UID: \"45d808cf-80c4-4f7b-a172-76e4ecd9e37b\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 
11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.053951 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4d2v\" (UniqueName: \"kubernetes.io/projected/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-kube-api-access-m4d2v\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111343 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4z4s\" (UniqueName: \"kubernetes.io/projected/37969e5d-3111-45cc-a711-da443a473c52-kube-api-access-d4z4s\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111416 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-metrics-certs\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111440 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-cert\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111454 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111488 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/37969e5d-3111-45cc-a711-da443a473c52-metallb-excludel2\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111509 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksjvz\" (UniqueName: \"kubernetes.io/projected/3462ad7c-24f3-4c73-990d-a0f471d08d1d-kube-api-access-ksjvz\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.111526 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.145528 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212182 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4z4s\" (UniqueName: \"kubernetes.io/projected/37969e5d-3111-45cc-a711-da443a473c52-kube-api-access-d4z4s\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212300 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-metrics-certs\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212345 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-cert\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212362 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212408 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/37969e5d-3111-45cc-a711-da443a473c52-metallb-excludel2\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212451 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksjvz\" (UniqueName: \"kubernetes.io/projected/3462ad7c-24f3-4c73-990d-a0f471d08d1d-kube-api-access-ksjvz\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.212483 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.213796 4593 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.213839 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist podName:37969e5d-3111-45cc-a711-da443a473c52 nodeName:}" failed. No retries permitted until 2026-01-29 11:11:04.713826714 +0000 UTC m=+730.586860905 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist") pod "speaker-m77zw" (UID: "37969e5d-3111-45cc-a711-da443a473c52") : secret "metallb-memberlist" not found Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.213890 4593 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 11:11:04 crc kubenswrapper[4593]: E0129 11:11:04.213920 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs podName:37969e5d-3111-45cc-a711-da443a473c52 nodeName:}" failed. No retries permitted until 2026-01-29 11:11:04.713911777 +0000 UTC m=+730.586945968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs") pod "speaker-m77zw" (UID: "37969e5d-3111-45cc-a711-da443a473c52") : secret "speaker-certs-secret" not found Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.213918 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/37969e5d-3111-45cc-a711-da443a473c52-metallb-excludel2\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.216998 4593 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.220253 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-metrics-certs\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.233013 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksjvz\" (UniqueName: \"kubernetes.io/projected/3462ad7c-24f3-4c73-990d-a0f471d08d1d-kube-api-access-ksjvz\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.239372 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3462ad7c-24f3-4c73-990d-a0f471d08d1d-cert\") pod \"controller-6968d8fdc4-hvqbg\" (UID: \"3462ad7c-24f3-4c73-990d-a0f471d08d1d\") " pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.246353 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4z4s\" (UniqueName: \"kubernetes.io/projected/37969e5d-3111-45cc-a711-da443a473c52-kube-api-access-d4z4s\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.268963 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.496360 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hvqbg"] Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.515776 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.520874 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9eb36e6e-e554-4b1a-9750-cd81c4c8d985-metrics-certs\") pod \"frr-k8s-54s6j\" (UID: \"9eb36e6e-e554-4b1a-9750-cd81c4c8d985\") " pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.622535 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h"] Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.718742 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.719162 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.723811 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-metrics-certs\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.723884 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/37969e5d-3111-45cc-a711-da443a473c52-memberlist\") pod \"speaker-m77zw\" (UID: \"37969e5d-3111-45cc-a711-da443a473c52\") " pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.757196 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:04 crc kubenswrapper[4593]: I0129 11:11:04.850157 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-m77zw" Jan 29 11:11:04 crc kubenswrapper[4593]: W0129 11:11:04.877134 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37969e5d_3111_45cc_a711_da443a473c52.slice/crio-c36f5b756d5f59f3c64e5d2c78c947ada68075f66368c2efc45a2bb45141ccb5 WatchSource:0}: Error finding container c36f5b756d5f59f3c64e5d2c78c947ada68075f66368c2efc45a2bb45141ccb5: Status 404 returned error can't find the container with id c36f5b756d5f59f3c64e5d2c78c947ada68075f66368c2efc45a2bb45141ccb5 Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.454748 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m77zw" event={"ID":"37969e5d-3111-45cc-a711-da443a473c52","Type":"ContainerStarted","Data":"da49a101b595e47000ffef939bc559d4f095da5a75f2d974d661e3b975516c67"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.455090 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m77zw" event={"ID":"37969e5d-3111-45cc-a711-da443a473c52","Type":"ContainerStarted","Data":"be297179f6d2b422103350b09de4b9b76026c9723c9cfd2f6d992b8bb2ed0691"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.455107 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m77zw" event={"ID":"37969e5d-3111-45cc-a711-da443a473c52","Type":"ContainerStarted","Data":"c36f5b756d5f59f3c64e5d2c78c947ada68075f66368c2efc45a2bb45141ccb5"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.455393 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-m77zw" Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.457253 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hvqbg" event={"ID":"3462ad7c-24f3-4c73-990d-a0f471d08d1d","Type":"ContainerStarted","Data":"ab41ed837969b02ad1310e3af6420286facfbf8c8ff6f3eeeba2d02457aa25b2"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.457296 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hvqbg" event={"ID":"3462ad7c-24f3-4c73-990d-a0f471d08d1d","Type":"ContainerStarted","Data":"fde2705cf396d756261abc7932844c7198e4b2c63b7935d628ca0c77e740d14f"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.457310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hvqbg" event={"ID":"3462ad7c-24f3-4c73-990d-a0f471d08d1d","Type":"ContainerStarted","Data":"ceb22d3eea8a11e5bbd98b0a2719c9fe00649a452d46e94bfbe80e4b69f88a81"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.457421 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.458696 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" event={"ID":"45d808cf-80c4-4f7b-a172-76e4ecd9e37b","Type":"ContainerStarted","Data":"bb473d1e9c034889468f435b70a468a54243aba4aec3ff16c21c09b1e2914d66"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.461217 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"f68ef21eb5b648b42a784e45953e8e91e591e2788890a8901af9e3bdc88172f8"} Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.535381 4593 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-m77zw" podStartSLOduration=2.535357984 podStartE2EDuration="2.535357984s" podCreationTimestamp="2026-01-29 11:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:11:05.498013347 +0000 UTC m=+731.371047538" watchObservedRunningTime="2026-01-29 11:11:05.535357984 +0000 UTC m=+731.408392175" Jan 29 11:11:05 crc kubenswrapper[4593]: I0129 11:11:05.536665 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-hvqbg" podStartSLOduration=2.536659369 podStartE2EDuration="2.536659369s" podCreationTimestamp="2026-01-29 11:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:11:05.53114904 +0000 UTC m=+731.404183231" watchObservedRunningTime="2026-01-29 11:11:05.536659369 +0000 UTC m=+731.409693560" Jan 29 11:11:13 crc kubenswrapper[4593]: I0129 11:11:13.523305 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" event={"ID":"45d808cf-80c4-4f7b-a172-76e4ecd9e37b","Type":"ContainerStarted","Data":"417b06ec496d9e33ef508a9a5eb79c9cd4c80fda52502e3d84e968f700ccb089"} Jan 29 11:11:13 crc kubenswrapper[4593]: I0129 11:11:13.523922 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:13 crc kubenswrapper[4593]: I0129 11:11:13.525215 4593 generic.go:334] "Generic (PLEG): container finished" podID="9eb36e6e-e554-4b1a-9750-cd81c4c8d985" containerID="dfb27ea50318b4478862fccd52a5fefccc1ba739a62073569464ba01cca98a8e" exitCode=0 Jan 29 11:11:13 crc kubenswrapper[4593]: I0129 11:11:13.525253 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerDied","Data":"dfb27ea50318b4478862fccd52a5fefccc1ba739a62073569464ba01cca98a8e"} Jan 29 11:11:13 crc kubenswrapper[4593]: I0129 11:11:13.548178 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" podStartSLOduration=2.365048033 podStartE2EDuration="10.548157712s" podCreationTimestamp="2026-01-29 11:11:03 +0000 UTC" firstStartedPulling="2026-01-29 11:11:04.632618312 +0000 UTC m=+730.505652503" lastFinishedPulling="2026-01-29 11:11:12.815727991 +0000 UTC m=+738.688762182" observedRunningTime="2026-01-29 11:11:13.543034134 +0000 UTC m=+739.416068335" watchObservedRunningTime="2026-01-29 11:11:13.548157712 +0000 UTC m=+739.421191903" Jan 29 11:11:14 crc kubenswrapper[4593]: I0129 11:11:14.274222 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-hvqbg" Jan 29 11:11:14 crc kubenswrapper[4593]: I0129 11:11:14.531721 4593 generic.go:334] "Generic (PLEG): container finished" podID="9eb36e6e-e554-4b1a-9750-cd81c4c8d985" containerID="60c8adf1de3cd4ec9fda6d23d3e35ec2660bce6b71ca05745cad2970c89c5e59" exitCode=0 Jan 29 11:11:14 crc kubenswrapper[4593]: I0129 11:11:14.532677 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerDied","Data":"60c8adf1de3cd4ec9fda6d23d3e35ec2660bce6b71ca05745cad2970c89c5e59"} Jan 29 11:11:15 crc 
kubenswrapper[4593]: I0129 11:11:15.541953 4593 generic.go:334] "Generic (PLEG): container finished" podID="9eb36e6e-e554-4b1a-9750-cd81c4c8d985" containerID="a691257679622b12c0c30b77e732c2da4a5c5f89ca173684b80680b82f49e173" exitCode=0 Jan 29 11:11:15 crc kubenswrapper[4593]: I0129 11:11:15.541998 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerDied","Data":"a691257679622b12c0c30b77e732c2da4a5c5f89ca173684b80680b82f49e173"} Jan 29 11:11:16 crc kubenswrapper[4593]: I0129 11:11:16.555922 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"f4a6eee69aa21abde7a7382f10b3cfee8aa3fa419a520f709238bc39953e25f1"} Jan 29 11:11:16 crc kubenswrapper[4593]: I0129 11:11:16.556237 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"2afd8f1ea5f7c176a015a86930077c08973a74690376e8246054566d18d12877"} Jan 29 11:11:16 crc kubenswrapper[4593]: I0129 11:11:16.556249 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"abab468dc5e54306a35d20ff24be0f4739e779de410923a225f9d5d1fec78e0d"} Jan 29 11:11:16 crc kubenswrapper[4593]: I0129 11:11:16.556257 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"aaf0b454a1aaeda4813d7fee96db1c3462a420a29ee8f7f3075266a386ddf639"} Jan 29 11:11:16 crc kubenswrapper[4593]: I0129 11:11:16.556265 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"ff46cc5a5ebdfa6fc97224c333c7c70ad8060803b3f4aaeb1a3415a9b9155697"} Jan 29 11:11:17 crc kubenswrapper[4593]: I0129 11:11:17.565688 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-54s6j" event={"ID":"9eb36e6e-e554-4b1a-9750-cd81c4c8d985","Type":"ContainerStarted","Data":"03af9554e98ea3d9085abb6ea4c6b02d486e4ee0a46c81b62c95e7f7787da7dc"} Jan 29 11:11:17 crc kubenswrapper[4593]: I0129 11:11:17.566802 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:17 crc kubenswrapper[4593]: I0129 11:11:17.590822 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-54s6j" podStartSLOduration=6.662991825 podStartE2EDuration="14.590798943s" podCreationTimestamp="2026-01-29 11:11:03 +0000 UTC" firstStartedPulling="2026-01-29 11:11:04.868089618 +0000 UTC m=+730.741123809" lastFinishedPulling="2026-01-29 11:11:12.795896736 +0000 UTC m=+738.668930927" observedRunningTime="2026-01-29 11:11:17.585904702 +0000 UTC m=+743.458938903" watchObservedRunningTime="2026-01-29 11:11:17.590798943 +0000 UTC m=+743.463833134" Jan 29 11:11:19 crc kubenswrapper[4593]: I0129 11:11:19.757967 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:19 crc kubenswrapper[4593]: I0129 11:11:19.796229 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:24 crc kubenswrapper[4593]: I0129 
11:11:24.150167 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dj42h" Jan 29 11:11:24 crc kubenswrapper[4593]: I0129 11:11:24.857761 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-m77zw" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.704903 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.706183 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.709795 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.710348 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.712082 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9p9rv" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.729884 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.768146 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2q4w\" (UniqueName: \"kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w\") pod \"openstack-operator-index-kxm2v\" (UID: \"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5\") " pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.869512 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2q4w\" (UniqueName: \"kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w\") pod \"openstack-operator-index-kxm2v\" (UID: \"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5\") " pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:27 crc kubenswrapper[4593]: I0129 11:11:27.904518 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2q4w\" (UniqueName: \"kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w\") pod \"openstack-operator-index-kxm2v\" (UID: \"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5\") " pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:28 crc kubenswrapper[4593]: I0129 11:11:28.027091 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:28 crc kubenswrapper[4593]: I0129 11:11:28.502451 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:28 crc kubenswrapper[4593]: I0129 11:11:28.650729 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kxm2v" event={"ID":"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5","Type":"ContainerStarted","Data":"d68540f4c1d7fff55c5e6157f96ccd88b42798a1072e01f0dfe99dc863e2bfa1"} Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.035744 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.644662 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sbxwt"] Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.647402 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.657683 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sbxwt"] Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.768907 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvv6t\" (UniqueName: \"kubernetes.io/projected/0661b605-afb6-4341-9703-ea25a3afc19d-kube-api-access-gvv6t\") pod \"openstack-operator-index-sbxwt\" (UID: \"0661b605-afb6-4341-9703-ea25a3afc19d\") " pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.870482 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvv6t\" (UniqueName: \"kubernetes.io/projected/0661b605-afb6-4341-9703-ea25a3afc19d-kube-api-access-gvv6t\") pod \"openstack-operator-index-sbxwt\" (UID: \"0661b605-afb6-4341-9703-ea25a3afc19d\") " pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.890668 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvv6t\" (UniqueName: \"kubernetes.io/projected/0661b605-afb6-4341-9703-ea25a3afc19d-kube-api-access-gvv6t\") pod \"openstack-operator-index-sbxwt\" (UID: \"0661b605-afb6-4341-9703-ea25a3afc19d\") " pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:31 crc kubenswrapper[4593]: I0129 11:11:31.974167 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.580962 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sbxwt"] Jan 29 11:11:34 crc kubenswrapper[4593]: W0129 11:11:34.589149 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0661b605_afb6_4341_9703_ea25a3afc19d.slice/crio-71690bcd11fbc3d54cf07cff1aa7a7a034633c0514fffdefca4fdd0c8a7ab780 WatchSource:0}: Error finding container 71690bcd11fbc3d54cf07cff1aa7a7a034633c0514fffdefca4fdd0c8a7ab780: Status 404 returned error can't find the container with id 71690bcd11fbc3d54cf07cff1aa7a7a034633c0514fffdefca4fdd0c8a7ab780 Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.699590 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kxm2v" event={"ID":"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5","Type":"ContainerStarted","Data":"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1"} Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.699605 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-kxm2v" podUID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" containerName="registry-server" containerID="cri-o://a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1" gracePeriod=2 Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.704747 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sbxwt" event={"ID":"0661b605-afb6-4341-9703-ea25a3afc19d","Type":"ContainerStarted","Data":"71690bcd11fbc3d54cf07cff1aa7a7a034633c0514fffdefca4fdd0c8a7ab780"} Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.767028 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-54s6j" Jan 29 11:11:34 crc kubenswrapper[4593]: I0129 11:11:34.798155 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-kxm2v" podStartSLOduration=2.690120556 podStartE2EDuration="7.798138202s" podCreationTimestamp="2026-01-29 11:11:27 +0000 UTC" firstStartedPulling="2026-01-29 11:11:28.520462309 +0000 UTC m=+754.393496500" lastFinishedPulling="2026-01-29 11:11:33.628479955 +0000 UTC m=+759.501514146" observedRunningTime="2026-01-29 11:11:34.747075736 +0000 UTC m=+760.620109927" watchObservedRunningTime="2026-01-29 11:11:34.798138202 +0000 UTC m=+760.671172403" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.090923 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.147245 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2q4w\" (UniqueName: \"kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w\") pod \"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5\" (UID: \"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5\") " Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.152459 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w" (OuterVolumeSpecName: "kube-api-access-r2q4w") pod "7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" (UID: "7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5"). InnerVolumeSpecName "kube-api-access-r2q4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.249407 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2q4w\" (UniqueName: \"kubernetes.io/projected/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5-kube-api-access-r2q4w\") on node \"crc\" DevicePath \"\"" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.712486 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sbxwt" event={"ID":"0661b605-afb6-4341-9703-ea25a3afc19d","Type":"ContainerStarted","Data":"9a696a11428c248a7b1d6ed9d4a2ec9d549276382fc56a651079e894a1eb7a0c"} Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.714102 4593 generic.go:334] "Generic (PLEG): container finished" podID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" containerID="a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1" exitCode=0 Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.714143 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kxm2v" event={"ID":"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5","Type":"ContainerDied","Data":"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1"} Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.714192 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-kxm2v" event={"ID":"7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5","Type":"ContainerDied","Data":"d68540f4c1d7fff55c5e6157f96ccd88b42798a1072e01f0dfe99dc863e2bfa1"} Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.714216 4593 scope.go:117] "RemoveContainer" containerID="a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.714692 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-kxm2v" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.730396 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sbxwt" podStartSLOduration=4.483980676 podStartE2EDuration="4.730375178s" podCreationTimestamp="2026-01-29 11:11:31 +0000 UTC" firstStartedPulling="2026-01-29 11:11:34.593371711 +0000 UTC m=+760.466405902" lastFinishedPulling="2026-01-29 11:11:34.839766213 +0000 UTC m=+760.712800404" observedRunningTime="2026-01-29 11:11:35.729875494 +0000 UTC m=+761.602909695" watchObservedRunningTime="2026-01-29 11:11:35.730375178 +0000 UTC m=+761.603409369" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.740988 4593 scope.go:117] "RemoveContainer" containerID="a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1" Jan 29 11:11:35 crc kubenswrapper[4593]: E0129 11:11:35.741471 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1\": container with ID starting with a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1 not found: ID does not exist" containerID="a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.741573 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1"} err="failed to get container status \"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1\": rpc error: code = NotFound desc = could not find container \"a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1\": container with ID starting with a0586d848e5813047592521239aecef586bd90512aeec3fbe57492fc9eaaeab1 not found: ID does not exist" Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.752327 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:35 crc kubenswrapper[4593]: I0129 11:11:35.756455 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-kxm2v"] Jan 29 11:11:37 crc kubenswrapper[4593]: I0129 11:11:37.084598 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" path="/var/lib/kubelet/pods/7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5/volumes" Jan 29 11:11:39 crc kubenswrapper[4593]: I0129 11:11:39.805825 4593 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 11:11:41 crc kubenswrapper[4593]: I0129 11:11:41.975745 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:41 crc kubenswrapper[4593]: I0129 11:11:41.976016 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:42 crc kubenswrapper[4593]: I0129 11:11:42.042887 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:42 crc kubenswrapper[4593]: I0129 11:11:42.783334 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-sbxwt" Jan 29 11:11:44 crc 
kubenswrapper[4593]: I0129 11:11:44.077721 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc"] Jan 29 11:11:44 crc kubenswrapper[4593]: E0129 11:11:44.078214 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" containerName="registry-server" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.078226 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" containerName="registry-server" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.078369 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bccc6ff-e749-4a2b-a900-87e5ea2bcaa5" containerName="registry-server" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.079293 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.082202 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-l67nj" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.087108 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc"] Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.180499 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.180559 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbh6b\" (UniqueName: \"kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.180732 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.281759 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.281825 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle\") pod 
\"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.281846 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbh6b\" (UniqueName: \"kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.282462 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.282516 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.305372 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbh6b\" (UniqueName: \"kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b\") pod \"91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.399406 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.613066 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc"] Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.776580 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerStarted","Data":"49810152f3eae5df3cd44041b27b8d1aa920d4dabd2d3cd1fd576348c19adca0"} Jan 29 11:11:44 crc kubenswrapper[4593]: I0129 11:11:44.776620 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerStarted","Data":"5d6dd77b97f1625ba0241d533476e086d054fbdffd6b227fc9db20889d1914c3"} Jan 29 11:11:45 crc kubenswrapper[4593]: I0129 11:11:45.784049 4593 generic.go:334] "Generic (PLEG): container finished" podID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerID="49810152f3eae5df3cd44041b27b8d1aa920d4dabd2d3cd1fd576348c19adca0" exitCode=0 Jan 29 11:11:45 crc kubenswrapper[4593]: I0129 11:11:45.784121 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerDied","Data":"49810152f3eae5df3cd44041b27b8d1aa920d4dabd2d3cd1fd576348c19adca0"} Jan 29 11:11:46 crc kubenswrapper[4593]: I0129 11:11:46.794128 4593 generic.go:334] "Generic (PLEG): container finished" podID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerID="56bc419c08dbd0401bac21f6b2226460477de8cd20a4a5bb2aa955c2785709aa" exitCode=0 Jan 29 11:11:46 crc kubenswrapper[4593]: I0129 11:11:46.794315 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerDied","Data":"56bc419c08dbd0401bac21f6b2226460477de8cd20a4a5bb2aa955c2785709aa"} Jan 29 11:11:47 crc kubenswrapper[4593]: I0129 11:11:47.813521 4593 generic.go:334] "Generic (PLEG): container finished" podID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerID="59edbce0d09644e6eb3a08d35e615c9401aa50707044d47ae64393a5974d0edc" exitCode=0 Jan 29 11:11:47 crc kubenswrapper[4593]: I0129 11:11:47.813573 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerDied","Data":"59edbce0d09644e6eb3a08d35e615c9401aa50707044d47ae64393a5974d0edc"} Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.050333 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.226698 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle\") pod \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.226808 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbh6b\" (UniqueName: \"kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b\") pod \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.227266 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util\") pod \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\" (UID: \"d389d4ca-e0e5-4a15-8ff2-afa4745998fa\") " Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.227382 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle" (OuterVolumeSpecName: "bundle") pod "d389d4ca-e0e5-4a15-8ff2-afa4745998fa" (UID: "d389d4ca-e0e5-4a15-8ff2-afa4745998fa"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.227918 4593 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.232960 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b" (OuterVolumeSpecName: "kube-api-access-jbh6b") pod "d389d4ca-e0e5-4a15-8ff2-afa4745998fa" (UID: "d389d4ca-e0e5-4a15-8ff2-afa4745998fa"). InnerVolumeSpecName "kube-api-access-jbh6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.242816 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util" (OuterVolumeSpecName: "util") pod "d389d4ca-e0e5-4a15-8ff2-afa4745998fa" (UID: "d389d4ca-e0e5-4a15-8ff2-afa4745998fa"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.329604 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbh6b\" (UniqueName: \"kubernetes.io/projected/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-kube-api-access-jbh6b\") on node \"crc\" DevicePath \"\"" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.329934 4593 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d389d4ca-e0e5-4a15-8ff2-afa4745998fa-util\") on node \"crc\" DevicePath \"\"" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.833338 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" event={"ID":"d389d4ca-e0e5-4a15-8ff2-afa4745998fa","Type":"ContainerDied","Data":"5d6dd77b97f1625ba0241d533476e086d054fbdffd6b227fc9db20889d1914c3"} Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.833389 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d6dd77b97f1625ba0241d533476e086d054fbdffd6b227fc9db20889d1914c3" Jan 29 11:11:49 crc kubenswrapper[4593]: I0129 11:11:49.833453 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.175466 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7"] Jan 29 11:11:56 crc kubenswrapper[4593]: E0129 11:11:56.176057 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="pull" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.176072 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="pull" Jan 29 11:11:56 crc kubenswrapper[4593]: E0129 11:11:56.176087 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="extract" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.176096 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="extract" Jan 29 11:11:56 crc kubenswrapper[4593]: E0129 11:11:56.176120 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="util" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.176128 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="util" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.176251 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d389d4ca-e0e5-4a15-8ff2-afa4745998fa" containerName="extract" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.176800 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.189502 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-45997" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.217237 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7"] Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.324127 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwkl6\" (UniqueName: \"kubernetes.io/projected/c8e623f1-2830-4c78-b17a-6000f32978a3-kube-api-access-jwkl6\") pod \"openstack-operator-controller-init-55ccc59995-d7xm7\" (UID: \"c8e623f1-2830-4c78-b17a-6000f32978a3\") " pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.425862 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwkl6\" (UniqueName: \"kubernetes.io/projected/c8e623f1-2830-4c78-b17a-6000f32978a3-kube-api-access-jwkl6\") pod \"openstack-operator-controller-init-55ccc59995-d7xm7\" (UID: \"c8e623f1-2830-4c78-b17a-6000f32978a3\") " pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.449230 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwkl6\" (UniqueName: \"kubernetes.io/projected/c8e623f1-2830-4c78-b17a-6000f32978a3-kube-api-access-jwkl6\") pod \"openstack-operator-controller-init-55ccc59995-d7xm7\" (UID: \"c8e623f1-2830-4c78-b17a-6000f32978a3\") " pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.495209 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:11:56 crc kubenswrapper[4593]: I0129 11:11:56.962296 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7"] Jan 29 11:11:57 crc kubenswrapper[4593]: I0129 11:11:57.881844 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" event={"ID":"c8e623f1-2830-4c78-b17a-6000f32978a3","Type":"ContainerStarted","Data":"a9d11ab8be468bada64bb970bd51e89c9dfae48c3df541beddb88eefd0b0d741"} Jan 29 11:12:03 crc kubenswrapper[4593]: I0129 11:12:03.946030 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:12:03 crc kubenswrapper[4593]: I0129 11:12:03.946646 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:12:04 crc kubenswrapper[4593]: I0129 11:12:04.932152 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" event={"ID":"c8e623f1-2830-4c78-b17a-6000f32978a3","Type":"ContainerStarted","Data":"a9d74499a95a4b3430bb3b0d4471e5f5640e815956d1986537d55802862f9574"} Jan 29 11:12:04 crc kubenswrapper[4593]: I0129 11:12:04.932538 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:12:04 crc kubenswrapper[4593]: I0129 11:12:04.962241 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" podStartSLOduration=2.191866503 podStartE2EDuration="8.96222711s" podCreationTimestamp="2026-01-29 11:11:56 +0000 UTC" firstStartedPulling="2026-01-29 11:11:56.965606074 +0000 UTC m=+782.838640255" lastFinishedPulling="2026-01-29 11:12:03.735966671 +0000 UTC m=+789.609000862" observedRunningTime="2026-01-29 11:12:04.961241616 +0000 UTC m=+790.834275807" watchObservedRunningTime="2026-01-29 11:12:04.96222711 +0000 UTC m=+790.835261291" Jan 29 11:12:16 crc kubenswrapper[4593]: I0129 11:12:16.499997 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-55ccc59995-d7xm7" Jan 29 11:12:33 crc kubenswrapper[4593]: I0129 11:12:33.946445 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:12:33 crc kubenswrapper[4593]: I0129 11:12:33.947068 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.286141 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.286988 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.290625 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wk95c" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.291397 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.292311 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.294871 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-rqbh4" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.315131 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.323222 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.336255 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sp6q\" (UniqueName: \"kubernetes.io/projected/c5e6d3a8-d6d9-4445-9708-28b88928333e-kube-api-access-4sp6q\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-7ns7q\" (UID: \"c5e6d3a8-d6d9-4445-9708-28b88928333e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.336362 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9spgf\" (UniqueName: \"kubernetes.io/projected/e35e9127-0337-4860-b938-bb477a408f1e-kube-api-access-9spgf\") pod \"cinder-operator-controller-manager-8d874c8fc-7hmqc\" (UID: \"e35e9127-0337-4860-b938-bb477a408f1e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.364136 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.364904 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.367197 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-shh6b" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.383507 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.390319 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.391283 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.395566 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lnr6s" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.430316 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.431267 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.437098 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sp6q\" (UniqueName: \"kubernetes.io/projected/c5e6d3a8-d6d9-4445-9708-28b88928333e-kube-api-access-4sp6q\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-7ns7q\" (UID: \"c5e6d3a8-d6d9-4445-9708-28b88928333e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.437198 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8q5\" (UniqueName: \"kubernetes.io/projected/499923d8-4593-4225-bc4c-6166003a0bb3-kube-api-access-mb8q5\") pod \"glance-operator-controller-manager-8886f4c47-2ml7m\" (UID: \"499923d8-4593-4225-bc4c-6166003a0bb3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.437244 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5xwf\" (UniqueName: \"kubernetes.io/projected/734187ee-1e17-4cdc-b3bb-cfbd6e424793-kube-api-access-k5xwf\") pod \"designate-operator-controller-manager-6d9697b7f4-xw2pz\" (UID: \"734187ee-1e17-4cdc-b3bb-cfbd6e424793\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.437276 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9spgf\" (UniqueName: \"kubernetes.io/projected/e35e9127-0337-4860-b938-bb477a408f1e-kube-api-access-9spgf\") pod \"cinder-operator-controller-manager-8d874c8fc-7hmqc\" (UID: \"e35e9127-0337-4860-b938-bb477a408f1e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.440554 4593 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-csc5k" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.456729 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.469099 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sp6q\" (UniqueName: \"kubernetes.io/projected/c5e6d3a8-d6d9-4445-9708-28b88928333e-kube-api-access-4sp6q\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-7ns7q\" (UID: \"c5e6d3a8-d6d9-4445-9708-28b88928333e\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.475363 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9spgf\" (UniqueName: \"kubernetes.io/projected/e35e9127-0337-4860-b938-bb477a408f1e-kube-api-access-9spgf\") pod \"cinder-operator-controller-manager-8d874c8fc-7hmqc\" (UID: \"e35e9127-0337-4860-b938-bb477a408f1e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.498970 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.516128 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.516376 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.532653 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-m9h5b" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.535713 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.541552 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksppz\" (UniqueName: \"kubernetes.io/projected/50471b23-1d0d-4bd9-a66f-a89b3a39a612-kube-api-access-ksppz\") pod \"heat-operator-controller-manager-69d6db494d-xqcrc\" (UID: \"50471b23-1d0d-4bd9-a66f-a89b3a39a612\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.554580 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb8q5\" (UniqueName: \"kubernetes.io/projected/499923d8-4593-4225-bc4c-6166003a0bb3-kube-api-access-mb8q5\") pod \"glance-operator-controller-manager-8886f4c47-2ml7m\" (UID: \"499923d8-4593-4225-bc4c-6166003a0bb3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.554711 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5xwf\" (UniqueName: \"kubernetes.io/projected/734187ee-1e17-4cdc-b3bb-cfbd6e424793-kube-api-access-k5xwf\") pod \"designate-operator-controller-manager-6d9697b7f4-xw2pz\" (UID: \"734187ee-1e17-4cdc-b3bb-cfbd6e424793\") " 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.554762 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7vc\" (UniqueName: \"kubernetes.io/projected/50a8381e-e59b-4400-9209-c33ef4f99c23-kube-api-access-5t7vc\") pod \"horizon-operator-controller-manager-5fb775575f-98l2v\" (UID: \"50a8381e-e59b-4400-9209-c33ef4f99c23\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.557467 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.597572 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.597910 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-q26cz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.618464 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.618767 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.625392 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5xwf\" (UniqueName: \"kubernetes.io/projected/734187ee-1e17-4cdc-b3bb-cfbd6e424793-kube-api-access-k5xwf\") pod \"designate-operator-controller-manager-6d9697b7f4-xw2pz\" (UID: \"734187ee-1e17-4cdc-b3bb-cfbd6e424793\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.636416 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.656612 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksppz\" (UniqueName: \"kubernetes.io/projected/50471b23-1d0d-4bd9-a66f-a89b3a39a612-kube-api-access-ksppz\") pod \"heat-operator-controller-manager-69d6db494d-xqcrc\" (UID: \"50471b23-1d0d-4bd9-a66f-a89b3a39a612\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.656731 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t7vc\" (UniqueName: \"kubernetes.io/projected/50a8381e-e59b-4400-9209-c33ef4f99c23-kube-api-access-5t7vc\") pod \"horizon-operator-controller-manager-5fb775575f-98l2v\" (UID: \"50a8381e-e59b-4400-9209-c33ef4f99c23\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.656770 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.656852 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gkt\" (UniqueName: \"kubernetes.io/projected/c2cda883-37e6-4c21-b320-4962ffdc98b3-kube-api-access-w5gkt\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.661268 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb8q5\" (UniqueName: \"kubernetes.io/projected/499923d8-4593-4225-bc4c-6166003a0bb3-kube-api-access-mb8q5\") pod \"glance-operator-controller-manager-8886f4c47-2ml7m\" (UID: \"499923d8-4593-4225-bc4c-6166003a0bb3\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.675673 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.682670 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.696364 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7vc\" (UniqueName: \"kubernetes.io/projected/50a8381e-e59b-4400-9209-c33ef4f99c23-kube-api-access-5t7vc\") pod \"horizon-operator-controller-manager-5fb775575f-98l2v\" (UID: \"50a8381e-e59b-4400-9209-c33ef4f99c23\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.713007 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.723576 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.724160 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksppz\" (UniqueName: \"kubernetes.io/projected/50471b23-1d0d-4bd9-a66f-a89b3a39a612-kube-api-access-ksppz\") pod \"heat-operator-controller-manager-69d6db494d-xqcrc\" (UID: \"50471b23-1d0d-4bd9-a66f-a89b3a39a612\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.724523 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.730317 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4vqwx" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.742059 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.743331 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.746914 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.762315 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k46bz\" (UniqueName: \"kubernetes.io/projected/812ebcfb-766d-4a1b-aaaa-2dd5a96ce047-kube-api-access-k46bz\") pod \"ironic-operator-controller-manager-5f4b8bd54d-t584q\" (UID: \"812ebcfb-766d-4a1b-aaaa-2dd5a96ce047\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.762415 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.762469 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rtrkb" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.762470 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5gkt\" (UniqueName: \"kubernetes.io/projected/c2cda883-37e6-4c21-b320-4962ffdc98b3-kube-api-access-w5gkt\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: E0129 11:12:34.763653 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 29 11:12:34 crc kubenswrapper[4593]: E0129 11:12:34.763714 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:35.263695675 +0000 UTC m=+821.136729866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.763911 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.781075 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.828725 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.829728 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.834260 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-29ncp" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.844867 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.845753 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.854072 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5gkt\" (UniqueName: \"kubernetes.io/projected/c2cda883-37e6-4c21-b320-4962ffdc98b3-kube-api-access-w5gkt\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.854528 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-ttrjz" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.861725 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.863976 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptsxk\" (UniqueName: \"kubernetes.io/projected/0881deda-c42a-48d8-9059-b7eaf66c0f9f-kube-api-access-ptsxk\") pod \"manila-operator-controller-manager-7dd968899f-c89cq\" (UID: \"0881deda-c42a-48d8-9059-b7eaf66c0f9f\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.864038 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbs8t\" (UniqueName: \"kubernetes.io/projected/62efedcb-a194-4692-8e84-a0da7777a400-kube-api-access-sbs8t\") pod \"mariadb-operator-controller-manager-67bf948998-zx6r8\" (UID: \"62efedcb-a194-4692-8e84-a0da7777a400\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.864113 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9lzd\" (UniqueName: \"kubernetes.io/projected/cdb96936-cd34-44fd-94b5-5570688fdfe6-kube-api-access-n9lzd\") pod \"keystone-operator-controller-manager-84f48565d4-xf5fn\" (UID: \"cdb96936-cd34-44fd-94b5-5570688fdfe6\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.864176 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k46bz\" (UniqueName: \"kubernetes.io/projected/812ebcfb-766d-4a1b-aaaa-2dd5a96ce047-kube-api-access-k46bz\") pod \"ironic-operator-controller-manager-5f4b8bd54d-t584q\" (UID: \"812ebcfb-766d-4a1b-aaaa-2dd5a96ce047\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.880902 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.883113 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.899867 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.900832 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.912235 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pv9gb" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.963454 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k46bz\" (UniqueName: \"kubernetes.io/projected/812ebcfb-766d-4a1b-aaaa-2dd5a96ce047-kube-api-access-k46bz\") pod \"ironic-operator-controller-manager-5f4b8bd54d-t584q\" (UID: \"812ebcfb-766d-4a1b-aaaa-2dd5a96ce047\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.965732 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9lzd\" (UniqueName: \"kubernetes.io/projected/cdb96936-cd34-44fd-94b5-5570688fdfe6-kube-api-access-n9lzd\") pod \"keystone-operator-controller-manager-84f48565d4-xf5fn\" (UID: \"cdb96936-cd34-44fd-94b5-5570688fdfe6\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.965852 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptsxk\" (UniqueName: \"kubernetes.io/projected/0881deda-c42a-48d8-9059-b7eaf66c0f9f-kube-api-access-ptsxk\") pod \"manila-operator-controller-manager-7dd968899f-c89cq\" (UID: \"0881deda-c42a-48d8-9059-b7eaf66c0f9f\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.965891 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbs8t\" (UniqueName: \"kubernetes.io/projected/62efedcb-a194-4692-8e84-a0da7777a400-kube-api-access-sbs8t\") pod \"mariadb-operator-controller-manager-67bf948998-zx6r8\" (UID: \"62efedcb-a194-4692-8e84-a0da7777a400\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.976511 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.983711 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p"] Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.989285 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:12:34 crc kubenswrapper[4593]: I0129 11:12:34.994785 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kfsxd" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.002733 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbs8t\" (UniqueName: \"kubernetes.io/projected/62efedcb-a194-4692-8e84-a0da7777a400-kube-api-access-sbs8t\") pod \"mariadb-operator-controller-manager-67bf948998-zx6r8\" (UID: \"62efedcb-a194-4692-8e84-a0da7777a400\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.015546 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.035058 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.035163 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.039515 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.040188 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.041032 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-v2cqr" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.043230 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.048402 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.048657 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-28sbr" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.063135 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.067542 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjhs7\" (UniqueName: \"kubernetes.io/projected/336c4e93-7d0b-4570-aafc-22e0f812db12-kube-api-access-qjhs7\") pod \"neutron-operator-controller-manager-585dbc889-qt87l\" (UID: \"336c4e93-7d0b-4570-aafc-22e0f812db12\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.067800 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptsxk\" (UniqueName: \"kubernetes.io/projected/0881deda-c42a-48d8-9059-b7eaf66c0f9f-kube-api-access-ptsxk\") pod \"manila-operator-controller-manager-7dd968899f-c89cq\" (UID: \"0881deda-c42a-48d8-9059-b7eaf66c0f9f\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.069127 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9lzd\" (UniqueName: \"kubernetes.io/projected/cdb96936-cd34-44fd-94b5-5570688fdfe6-kube-api-access-n9lzd\") pod \"keystone-operator-controller-manager-84f48565d4-xf5fn\" (UID: \"cdb96936-cd34-44fd-94b5-5570688fdfe6\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.073559 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-885pn"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.074499 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.077991 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-ztdjm" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.093592 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-885pn"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.093660 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.120250 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.122177 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.122277 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.126174 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-j4vnr" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.144202 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.172064 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173024 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173091 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjhs7\" (UniqueName: \"kubernetes.io/projected/336c4e93-7d0b-4570-aafc-22e0f812db12-kube-api-access-qjhs7\") pod \"neutron-operator-controller-manager-585dbc889-qt87l\" (UID: \"336c4e93-7d0b-4570-aafc-22e0f812db12\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173149 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nqmf\" (UniqueName: \"kubernetes.io/projected/9b88fe2c-a82a-4284-961a-8af3818815ec-kube-api-access-5nqmf\") pod \"ovn-operator-controller-manager-788c46999f-885pn\" (UID: \"9b88fe2c-a82a-4284-961a-8af3818815ec\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173182 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173211 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2k2v\" (UniqueName: \"kubernetes.io/projected/2c7ec826-43f0-49f3-9d96-4330427e4ed9-kube-api-access-g2k2v\") pod \"placement-operator-controller-manager-5b964cf4cd-kttv8\" (UID: \"2c7ec826-43f0-49f3-9d96-4330427e4ed9\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173237 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjf68\" (UniqueName: \"kubernetes.io/projected/40ab1792-0354-4c78-ac44-a217fbc610a9-kube-api-access-mjf68\") pod \"nova-operator-controller-manager-55bff696bd-8kf6p\" (UID: \"40ab1792-0354-4c78-ac44-a217fbc610a9\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173283 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkf7m\" (UniqueName: \"kubernetes.io/projected/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-kube-api-access-bkf7m\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.173320 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-89dhm\" (UniqueName: \"kubernetes.io/projected/ba6fb45a-59ff-42ee-acb0-0ee43d001e1e-kube-api-access-89dhm\") pod \"octavia-operator-controller-manager-6687f8d877-9dbds\" (UID: \"ba6fb45a-59ff-42ee-acb0-0ee43d001e1e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.181816 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.182212 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.182303 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-drg7l" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.230116 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjhs7\" (UniqueName: \"kubernetes.io/projected/336c4e93-7d0b-4570-aafc-22e0f812db12-kube-api-access-qjhs7\") pod \"neutron-operator-controller-manager-585dbc889-qt87l\" (UID: \"336c4e93-7d0b-4570-aafc-22e0f812db12\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.235494 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.253342 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277650 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nqmf\" (UniqueName: \"kubernetes.io/projected/9b88fe2c-a82a-4284-961a-8af3818815ec-kube-api-access-5nqmf\") pod \"ovn-operator-controller-manager-788c46999f-885pn\" (UID: \"9b88fe2c-a82a-4284-961a-8af3818815ec\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277723 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277760 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2k2v\" (UniqueName: \"kubernetes.io/projected/2c7ec826-43f0-49f3-9d96-4330427e4ed9-kube-api-access-g2k2v\") pod \"placement-operator-controller-manager-5b964cf4cd-kttv8\" (UID: \"2c7ec826-43f0-49f3-9d96-4330427e4ed9\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277793 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjf68\" (UniqueName: \"kubernetes.io/projected/40ab1792-0354-4c78-ac44-a217fbc610a9-kube-api-access-mjf68\") pod \"nova-operator-controller-manager-55bff696bd-8kf6p\" (UID: 
\"40ab1792-0354-4c78-ac44-a217fbc610a9\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277877 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkf7m\" (UniqueName: \"kubernetes.io/projected/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-kube-api-access-bkf7m\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277913 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89dhm\" (UniqueName: \"kubernetes.io/projected/ba6fb45a-59ff-42ee-acb0-0ee43d001e1e-kube-api-access-89dhm\") pod \"octavia-operator-controller-manager-6687f8d877-9dbds\" (UID: \"ba6fb45a-59ff-42ee-acb0-0ee43d001e1e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.277943 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.278090 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.281409 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.281510 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:36.281481213 +0000 UTC m=+822.154515404 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.281536 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:35.781522714 +0000 UTC m=+821.654556905 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.287606 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.290148 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.307087 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gjfr9" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.345187 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89dhm\" (UniqueName: \"kubernetes.io/projected/ba6fb45a-59ff-42ee-acb0-0ee43d001e1e-kube-api-access-89dhm\") pod \"octavia-operator-controller-manager-6687f8d877-9dbds\" (UID: \"ba6fb45a-59ff-42ee-acb0-0ee43d001e1e\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.355281 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkf7m\" (UniqueName: \"kubernetes.io/projected/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-kube-api-access-bkf7m\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.356204 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nqmf\" (UniqueName: \"kubernetes.io/projected/9b88fe2c-a82a-4284-961a-8af3818815ec-kube-api-access-5nqmf\") pod \"ovn-operator-controller-manager-788c46999f-885pn\" (UID: \"9b88fe2c-a82a-4284-961a-8af3818815ec\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.356273 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.357403 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjf68\" (UniqueName: \"kubernetes.io/projected/40ab1792-0354-4c78-ac44-a217fbc610a9-kube-api-access-mjf68\") pod \"nova-operator-controller-manager-55bff696bd-8kf6p\" (UID: \"40ab1792-0354-4c78-ac44-a217fbc610a9\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.357560 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.377737 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8xtx9" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.378875 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.383789 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5npq\" (UniqueName: \"kubernetes.io/projected/0e86fa54-1e41-4bb9-86c7-a0ea0d919270-kube-api-access-x5npq\") pod \"swift-operator-controller-manager-68fc8c869-k4b7q\" (UID: \"0e86fa54-1e41-4bb9-86c7-a0ea0d919270\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.386127 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2k2v\" (UniqueName: \"kubernetes.io/projected/2c7ec826-43f0-49f3-9d96-4330427e4ed9-kube-api-access-g2k2v\") pod \"placement-operator-controller-manager-5b964cf4cd-kttv8\" (UID: \"2c7ec826-43f0-49f3-9d96-4330427e4ed9\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.452462 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.473552 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.478199 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.487406 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns5l7\" (UniqueName: \"kubernetes.io/projected/b45fb247-850e-40b4-b62e-8551d55efcba-kube-api-access-ns5l7\") pod \"test-operator-controller-manager-56f8bfcd9f-ltfr4\" (UID: \"b45fb247-850e-40b4-b62e-8551d55efcba\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.487506 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jczk\" (UniqueName: \"kubernetes.io/projected/ea8d9bb8-bdec-453d-a308-28b962971254-kube-api-access-7jczk\") pod \"telemetry-operator-controller-manager-64b5b76f97-z4mp8\" (UID: \"ea8d9bb8-bdec-453d-a308-28b962971254\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.487568 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5npq\" (UniqueName: \"kubernetes.io/projected/0e86fa54-1e41-4bb9-86c7-a0ea0d919270-kube-api-access-x5npq\") pod \"swift-operator-controller-manager-68fc8c869-k4b7q\" (UID: \"0e86fa54-1e41-4bb9-86c7-a0ea0d919270\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.504351 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.522688 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zmssx"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.523991 4593 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.526507 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5npq\" (UniqueName: \"kubernetes.io/projected/0e86fa54-1e41-4bb9-86c7-a0ea0d919270-kube-api-access-x5npq\") pod \"swift-operator-controller-manager-68fc8c869-k4b7q\" (UID: \"0e86fa54-1e41-4bb9-86c7-a0ea0d919270\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.527039 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9hpkh" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.560452 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zmssx"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.580968 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.581991 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.583985 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lj4r8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.584179 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.584303 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.591317 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jczk\" (UniqueName: \"kubernetes.io/projected/ea8d9bb8-bdec-453d-a308-28b962971254-kube-api-access-7jczk\") pod \"telemetry-operator-controller-manager-64b5b76f97-z4mp8\" (UID: \"ea8d9bb8-bdec-453d-a308-28b962971254\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.591777 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.591877 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gqb\" (UniqueName: \"kubernetes.io/projected/0259a320-8da9-48e5-8d73-25b09774d9c1-kube-api-access-s4gqb\") pod \"watcher-operator-controller-manager-564965969-zmssx\" (UID: \"0259a320-8da9-48e5-8d73-25b09774d9c1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.592037 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns5l7\" (UniqueName: 
\"kubernetes.io/projected/b45fb247-850e-40b4-b62e-8551d55efcba-kube-api-access-ns5l7\") pod \"test-operator-controller-manager-56f8bfcd9f-ltfr4\" (UID: \"b45fb247-850e-40b4-b62e-8551d55efcba\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.592139 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.592261 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbkf\" (UniqueName: \"kubernetes.io/projected/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-kube-api-access-rxbkf\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.616411 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.627369 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.630779 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns5l7\" (UniqueName: \"kubernetes.io/projected/b45fb247-850e-40b4-b62e-8551d55efcba-kube-api-access-ns5l7\") pod \"test-operator-controller-manager-56f8bfcd9f-ltfr4\" (UID: \"b45fb247-850e-40b4-b62e-8551d55efcba\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.651074 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.651987 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.658589 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jczk\" (UniqueName: \"kubernetes.io/projected/ea8d9bb8-bdec-453d-a308-28b962971254-kube-api-access-7jczk\") pod \"telemetry-operator-controller-manager-64b5b76f97-z4mp8\" (UID: \"ea8d9bb8-bdec-453d-a308-28b962971254\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.658863 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-d9bh5" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.682020 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.694956 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.695046 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:36.195027336 +0000 UTC m=+822.068061527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.695283 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.695314 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4gqb\" (UniqueName: \"kubernetes.io/projected/0259a320-8da9-48e5-8d73-25b09774d9c1-kube-api-access-s4gqb\") pod \"watcher-operator-controller-manager-564965969-zmssx\" (UID: \"0259a320-8da9-48e5-8d73-25b09774d9c1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.695359 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54qbk\" (UniqueName: \"kubernetes.io/projected/2f32633b-0490-4885-9543-a140c807c742-kube-api-access-54qbk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tfkk2\" (UID: \"2f32633b-0490-4885-9543-a140c807c742\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.695397 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.695431 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbkf\" (UniqueName: \"kubernetes.io/projected/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-kube-api-access-rxbkf\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.695959 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.695997 4593 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:36.195984802 +0000 UTC m=+822.069018993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.702854 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.733104 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4gqb\" (UniqueName: \"kubernetes.io/projected/0259a320-8da9-48e5-8d73-25b09774d9c1-kube-api-access-s4gqb\") pod \"watcher-operator-controller-manager-564965969-zmssx\" (UID: \"0259a320-8da9-48e5-8d73-25b09774d9c1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.734507 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.745300 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbkf\" (UniqueName: \"kubernetes.io/projected/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-kube-api-access-rxbkf\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.796622 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.796725 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54qbk\" (UniqueName: \"kubernetes.io/projected/2f32633b-0490-4885-9543-a140c807c742-kube-api-access-54qbk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tfkk2\" (UID: \"2f32633b-0490-4885-9543-a140c807c742\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.798384 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: E0129 11:12:35.798475 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:36.798422278 +0000 UTC m=+822.671456469 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.822653 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:12:35 crc kubenswrapper[4593]: W0129 11:12:35.839673 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5e6d3a8_d6d9_4445_9708_28b88928333e.slice/crio-b7c726b993850d0ecec767a1630667e32d3392bb46f7fbc47e63b9fc069a3777 WatchSource:0}: Error finding container b7c726b993850d0ecec767a1630667e32d3392bb46f7fbc47e63b9fc069a3777: Status 404 returned error can't find the container with id b7c726b993850d0ecec767a1630667e32d3392bb46f7fbc47e63b9fc069a3777 Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.841596 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54qbk\" (UniqueName: \"kubernetes.io/projected/2f32633b-0490-4885-9543-a140c807c742-kube-api-access-54qbk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tfkk2\" (UID: \"2f32633b-0490-4885-9543-a140c807c742\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" Jan 29 11:12:35 crc kubenswrapper[4593]: W0129 11:12:35.858805 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode35e9127_0337_4860_b938_bb477a408f1e.slice/crio-786f20cac1637efa8bbcc8dfc9f4b935d7a0c790d1615085c5a8596bc0419305 WatchSource:0}: Error finding container 786f20cac1637efa8bbcc8dfc9f4b935d7a0c790d1615085c5a8596bc0419305: Status 404 returned error can't find the container with id 786f20cac1637efa8bbcc8dfc9f4b935d7a0c790d1615085c5a8596bc0419305 Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.862469 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.892496 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc"] Jan 29 11:12:35 crc kubenswrapper[4593]: I0129 11:12:35.898970 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.019579 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.065335 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.116500 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" event={"ID":"c5e6d3a8-d6d9-4445-9708-28b88928333e","Type":"ContainerStarted","Data":"b7c726b993850d0ecec767a1630667e32d3392bb46f7fbc47e63b9fc069a3777"} Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.117389 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" event={"ID":"734187ee-1e17-4cdc-b3bb-cfbd6e424793","Type":"ContainerStarted","Data":"965153987ca6aac88bec8776c6ea464b3f89b694a3564f1126b3063b735214df"} Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.118457 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" event={"ID":"e35e9127-0337-4860-b938-bb477a408f1e","Type":"ContainerStarted","Data":"786f20cac1637efa8bbcc8dfc9f4b935d7a0c790d1615085c5a8596bc0419305"} Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.201220 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.201562 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.201892 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.202011 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:37.201996953 +0000 UTC m=+823.075031144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.202439 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.202524 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:37.202515556 +0000 UTC m=+823.075549747 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.304450 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.304619 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.304695 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:38.304675515 +0000 UTC m=+824.177709706 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.372523 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.387007 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.395794 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.406103 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq"] Jan 29 11:12:36 crc kubenswrapper[4593]: W0129 11:12:36.414393 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod812ebcfb_766d_4a1b_aaaa_2dd5a96ce047.slice/crio-df5f36461026f996f4b63408ff77299522c8fb4eaca84b6d9fff3ed4bc3b7164 WatchSource:0}: Error finding container df5f36461026f996f4b63408ff77299522c8fb4eaca84b6d9fff3ed4bc3b7164: Status 404 returned error can't find the container with id df5f36461026f996f4b63408ff77299522c8fb4eaca84b6d9fff3ed4bc3b7164 Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.446182 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.456064 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.745166 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 
11:12:36.762406 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.809910 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.815297 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.815489 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.815536 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:38.815521423 +0000 UTC m=+824.688555614 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.846253 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-885pn"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.868763 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.874062 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.877938 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.882739 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p"] Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.886971 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: 
{{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ns5l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-ltfr4_openstack-operators(b45fb247-850e-40b4-b62e-8551d55efcba): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.888196 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" podUID="b45fb247-850e-40b4-b62e-8551d55efcba" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.905004 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4gqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-zmssx_openstack-operators(0259a320-8da9-48e5-8d73-25b09774d9c1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.907322 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" podUID="0259a320-8da9-48e5-8d73-25b09774d9c1" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.907485 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjf68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-8kf6p_openstack-operators(40ab1792-0354-4c78-ac44-a217fbc610a9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.907486 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89dhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-9dbds_openstack-operators(ba6fb45a-59ff-42ee-acb0-0ee43d001e1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:12:36 crc 
kubenswrapper[4593]: E0129 11:12:36.908709 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" podUID="40ab1792-0354-4c78-ac44-a217fbc610a9" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.908740 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podUID="ba6fb45a-59ff-42ee-acb0-0ee43d001e1e" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.912341 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54qbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-tfkk2_openstack-operators(2f32633b-0490-4885-9543-a140c807c742): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 11:12:36 crc kubenswrapper[4593]: E0129 11:12:36.915268 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podUID="2f32633b-0490-4885-9543-a140c807c742" Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.934310 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4"] Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.942721 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zmssx"] 
Jan 29 11:12:36 crc kubenswrapper[4593]: I0129 11:12:36.955187 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2"] Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.128750 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" event={"ID":"ba6fb45a-59ff-42ee-acb0-0ee43d001e1e","Type":"ContainerStarted","Data":"b6904b122aa43e6bfe8e8f8a8012d3bcb9a23b1ca090ef3aad98496517e2db56"} Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.129856 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podUID="ba6fb45a-59ff-42ee-acb0-0ee43d001e1e" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.130316 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" event={"ID":"cdb96936-cd34-44fd-94b5-5570688fdfe6","Type":"ContainerStarted","Data":"b57c48584683a7b772fb34becddc58db9678326e8edb615515f279fff1c48fa7"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.133709 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" event={"ID":"2c7ec826-43f0-49f3-9d96-4330427e4ed9","Type":"ContainerStarted","Data":"582c2d7f177ec4cfde444c5f91fb5f538f8433bdb119026844f9e6f8a9afdb15"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.135495 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" event={"ID":"0881deda-c42a-48d8-9059-b7eaf66c0f9f","Type":"ContainerStarted","Data":"4bc0aa79b3876fa5d3ab832ecbfad28227117613b1b79f5d10a9b94f8b4e877e"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.137016 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" event={"ID":"50a8381e-e59b-4400-9209-c33ef4f99c23","Type":"ContainerStarted","Data":"dd09c96251cf7561fa20be69218c5d25a25dba5a7216d037bb115aa599824c5b"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.157059 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" event={"ID":"336c4e93-7d0b-4570-aafc-22e0f812db12","Type":"ContainerStarted","Data":"a820fc0f0d271023af320c507058fdac3ab434ba6c76ffad7488457a52d75bd1"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.159048 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" event={"ID":"0259a320-8da9-48e5-8d73-25b09774d9c1","Type":"ContainerStarted","Data":"da266e037f1b44105a24231dff74753f4daa8e8e13109ed35943b4a4f035d3fc"} Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.162992 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" 
podUID="0259a320-8da9-48e5-8d73-25b09774d9c1" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.163993 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" event={"ID":"ea8d9bb8-bdec-453d-a308-28b962971254","Type":"ContainerStarted","Data":"8cd6cd11f94ddece266f00c5871f4c069288985d2333a6f1fd538ed5232edae2"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.179443 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" event={"ID":"499923d8-4593-4225-bc4c-6166003a0bb3","Type":"ContainerStarted","Data":"b695db3e07b3495e141f68edcb1032b6e88dbd5ce50caf474deafd692bb9303c"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.184735 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" event={"ID":"812ebcfb-766d-4a1b-aaaa-2dd5a96ce047","Type":"ContainerStarted","Data":"df5f36461026f996f4b63408ff77299522c8fb4eaca84b6d9fff3ed4bc3b7164"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.193990 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" event={"ID":"2f32633b-0490-4885-9543-a140c807c742","Type":"ContainerStarted","Data":"57983b33b9c4365af458eb0a487a37e898ce0961793a79dcde8f7dee293c0035"} Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.195353 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podUID="2f32633b-0490-4885-9543-a140c807c742" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.195680 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" event={"ID":"0e86fa54-1e41-4bb9-86c7-a0ea0d919270","Type":"ContainerStarted","Data":"7e199caad175b7645f2e173d45a257d98ed4b7bad605f6d3b4f4bb3eb3b6804b"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.201581 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" event={"ID":"50471b23-1d0d-4bd9-a66f-a89b3a39a612","Type":"ContainerStarted","Data":"29a4ccf3e7a9396fff270675aaf15dcb46f48c28d1f6813e5fcf208efd72db60"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.203491 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" event={"ID":"b45fb247-850e-40b4-b62e-8551d55efcba","Type":"ContainerStarted","Data":"fe7fa25a28f3eb925519b80a9193c791f8b156af0045d9f6e3d2f1039ec86900"} Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.212817 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" podUID="b45fb247-850e-40b4-b62e-8551d55efcba" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.214832 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" event={"ID":"40ab1792-0354-4c78-ac44-a217fbc610a9","Type":"ContainerStarted","Data":"0ede2967655f210367648677750ecf2a3054e4c19502eb303c694da0e5d91abc"} Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.216749 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" podUID="40ab1792-0354-4c78-ac44-a217fbc610a9" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.227202 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.227404 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.230195 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.230248 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:39.230232357 +0000 UTC m=+825.103266548 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.230311 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:37 crc kubenswrapper[4593]: E0129 11:12:37.230371 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:39.230353051 +0000 UTC m=+825.103387272 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.232032 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" event={"ID":"9b88fe2c-a82a-4284-961a-8af3818815ec","Type":"ContainerStarted","Data":"018f12c4d542f62ba0c41899892c28cfae8b1ba0a417cce1c065adabc73c7289"} Jan 29 11:12:37 crc kubenswrapper[4593]: I0129 11:12:37.233888 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" event={"ID":"62efedcb-a194-4692-8e84-a0da7777a400","Type":"ContainerStarted","Data":"dc08e9cc530f50716a46502f0ac25e8a9245724d249bdbf70860fbbffeb17f31"} Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.243508 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" podUID="0259a320-8da9-48e5-8d73-25b09774d9c1" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.244291 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" podUID="b45fb247-850e-40b4-b62e-8551d55efcba" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.249272 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podUID="ba6fb45a-59ff-42ee-acb0-0ee43d001e1e" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.249323 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podUID="2f32633b-0490-4885-9543-a140c807c742" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.249338 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" podUID="40ab1792-0354-4c78-ac44-a217fbc610a9" Jan 29 11:12:38 crc kubenswrapper[4593]: I0129 11:12:38.343429 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.343627 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.343692 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:42.343675151 +0000 UTC m=+828.216709342 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:38 crc kubenswrapper[4593]: I0129 11:12:38.850875 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.851104 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:38 crc kubenswrapper[4593]: E0129 11:12:38.851159 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:42.85114279 +0000 UTC m=+828.724176981 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:39 crc kubenswrapper[4593]: I0129 11:12:39.257720 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:39 crc kubenswrapper[4593]: I0129 11:12:39.257804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:39 crc kubenswrapper[4593]: E0129 11:12:39.257896 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:39 crc kubenswrapper[4593]: E0129 11:12:39.257904 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:39 crc kubenswrapper[4593]: E0129 11:12:39.258011 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:43.25797989 +0000 UTC m=+829.131014081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:39 crc kubenswrapper[4593]: E0129 11:12:39.258088 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:43.258038822 +0000 UTC m=+829.131073073 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:42 crc kubenswrapper[4593]: I0129 11:12:42.421912 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:42 crc kubenswrapper[4593]: E0129 11:12:42.422233 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:42 crc kubenswrapper[4593]: E0129 11:12:42.422508 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:50.422474221 +0000 UTC m=+836.295508452 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:42 crc kubenswrapper[4593]: I0129 11:12:42.931149 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:42 crc kubenswrapper[4593]: E0129 11:12:42.931398 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:42 crc kubenswrapper[4593]: E0129 11:12:42.931491 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:50.93146165 +0000 UTC m=+836.804495841 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:43 crc kubenswrapper[4593]: I0129 11:12:43.337930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:43 crc kubenswrapper[4593]: I0129 11:12:43.338008 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:43 crc kubenswrapper[4593]: E0129 11:12:43.338150 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:43 crc kubenswrapper[4593]: E0129 11:12:43.338237 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:51.338214368 +0000 UTC m=+837.211248559 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:43 crc kubenswrapper[4593]: E0129 11:12:43.338154 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:43 crc kubenswrapper[4593]: E0129 11:12:43.338286 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:12:51.33827585 +0000 UTC m=+837.211310041 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:47 crc kubenswrapper[4593]: E0129 11:12:47.481089 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 29 11:12:47 crc kubenswrapper[4593]: E0129 11:12:47.481565 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mb8q5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-2ml7m_openstack-operators(499923d8-4593-4225-bc4c-6166003a0bb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:47 crc kubenswrapper[4593]: E0129 11:12:47.482921 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" 
podUID="499923d8-4593-4225-bc4c-6166003a0bb3" Jan 29 11:12:48 crc kubenswrapper[4593]: E0129 11:12:48.314925 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" podUID="499923d8-4593-4225-bc4c-6166003a0bb3" Jan 29 11:12:50 crc kubenswrapper[4593]: I0129 11:12:50.448845 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.449002 4593 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.449456 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert podName:c2cda883-37e6-4c21-b320-4962ffdc98b3 nodeName:}" failed. No retries permitted until 2026-01-29 11:13:06.449427685 +0000 UTC m=+852.322461896 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert") pod "infra-operator-controller-manager-79955696d6-6zkvt" (UID: "c2cda883-37e6-4c21-b320-4962ffdc98b3") : secret "infra-operator-webhook-server-cert" not found Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.907738 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4" Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.907938 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5nqmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-885pn_openstack-operators(9b88fe2c-a82a-4284-961a-8af3818815ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.909160 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" podUID="9b88fe2c-a82a-4284-961a-8af3818815ec" Jan 29 11:12:50 crc kubenswrapper[4593]: I0129 11:12:50.955932 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.956058 4593 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:50 crc kubenswrapper[4593]: E0129 11:12:50.956123 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert podName:f6e2fc57-0cce-4f5a-bf3e-63efbfff1073 nodeName:}" failed. No retries permitted until 2026-01-29 11:13:06.956104104 +0000 UTC m=+852.829138295 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" (UID: "f6e2fc57-0cce-4f5a-bf3e-63efbfff1073") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.337170 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" podUID="9b88fe2c-a82a-4284-961a-8af3818815ec" Jan 29 11:12:51 crc kubenswrapper[4593]: I0129 11:12:51.361110 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:51 crc kubenswrapper[4593]: I0129 11:12:51.361182 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.361826 4593 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.361892 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:13:07.361873746 +0000 UTC m=+853.234907937 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "webhook-server-cert" not found Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.361974 4593 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.362006 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs podName:960bb326-dc22-43e5-bc4f-05c9ce9e26a9 nodeName:}" failed. No retries permitted until 2026-01-29 11:13:07.36199746 +0000 UTC m=+853.235031641 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs") pod "openstack-operator-controller-manager-6d898fd894-sh94p" (UID: "960bb326-dc22-43e5-bc4f-05c9ce9e26a9") : secret "metrics-server-cert" not found Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.596771 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.596988 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7jczk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-z4mp8_openstack-operators(ea8d9bb8-bdec-453d-a308-28b962971254): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:51 crc kubenswrapper[4593]: E0129 11:12:51.598162 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" podUID="ea8d9bb8-bdec-453d-a308-28b962971254" Jan 29 11:12:52 crc kubenswrapper[4593]: E0129 11:12:52.343556 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" podUID="ea8d9bb8-bdec-453d-a308-28b962971254" Jan 29 11:12:54 crc kubenswrapper[4593]: E0129 11:12:54.358522 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 29 11:12:54 crc kubenswrapper[4593]: E0129 11:12:54.359087 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptsxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-c89cq_openstack-operators(0881deda-c42a-48d8-9059-b7eaf66c0f9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:54 crc 
kubenswrapper[4593]: E0129 11:12:54.360266 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" podUID="0881deda-c42a-48d8-9059-b7eaf66c0f9f" Jan 29 11:12:55 crc kubenswrapper[4593]: E0129 11:12:55.365010 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" podUID="0881deda-c42a-48d8-9059-b7eaf66c0f9f" Jan 29 11:12:55 crc kubenswrapper[4593]: E0129 11:12:55.568074 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 29 11:12:55 crc kubenswrapper[4593]: E0129 11:12:55.568294 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ksppz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
heat-operator-controller-manager-69d6db494d-xqcrc_openstack-operators(50471b23-1d0d-4bd9-a66f-a89b3a39a612): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:55 crc kubenswrapper[4593]: E0129 11:12:55.570297 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" podUID="50471b23-1d0d-4bd9-a66f-a89b3a39a612" Jan 29 11:12:56 crc kubenswrapper[4593]: E0129 11:12:56.370453 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" podUID="50471b23-1d0d-4bd9-a66f-a89b3a39a612" Jan 29 11:12:57 crc kubenswrapper[4593]: E0129 11:12:57.752043 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 29 11:12:57 crc kubenswrapper[4593]: E0129 11:12:57.752243 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g2k2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-kttv8_openstack-operators(2c7ec826-43f0-49f3-9d96-4330427e4ed9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:57 crc kubenswrapper[4593]: E0129 11:12:57.753319 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" podUID="2c7ec826-43f0-49f3-9d96-4330427e4ed9" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.291353 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.291598 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k46bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-t584q_openstack-operators(812ebcfb-766d-4a1b-aaaa-2dd5a96ce047): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.292816 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" podUID="812ebcfb-766d-4a1b-aaaa-2dd5a96ce047" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.382501 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" podUID="812ebcfb-766d-4a1b-aaaa-2dd5a96ce047" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.384273 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" podUID="2c7ec826-43f0-49f3-9d96-4330427e4ed9" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.859443 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.860668 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x5npq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-k4b7q_openstack-operators(0e86fa54-1e41-4bb9-86c7-a0ea0d919270): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:12:58 crc kubenswrapper[4593]: E0129 11:12:58.861868 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" podUID="0e86fa54-1e41-4bb9-86c7-a0ea0d919270" Jan 29 11:12:59 crc kubenswrapper[4593]: E0129 11:12:59.388120 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" podUID="0e86fa54-1e41-4bb9-86c7-a0ea0d919270" Jan 29 11:13:01 crc kubenswrapper[4593]: E0129 11:13:01.539556 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 29 11:13:01 crc kubenswrapper[4593]: E0129 11:13:01.544706 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbs8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-zx6r8_openstack-operators(62efedcb-a194-4692-8e84-a0da7777a400): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:13:01 crc kubenswrapper[4593]: E0129 11:13:01.547560 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" podUID="62efedcb-a194-4692-8e84-a0da7777a400" Jan 29 11:13:02 crc kubenswrapper[4593]: E0129 11:13:02.119163 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 29 11:13:02 crc kubenswrapper[4593]: E0129 11:13:02.119851 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9lzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-xf5fn_openstack-operators(cdb96936-cd34-44fd-94b5-5570688fdfe6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:13:02 crc kubenswrapper[4593]: E0129 11:13:02.120996 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" podUID="cdb96936-cd34-44fd-94b5-5570688fdfe6" Jan 29 11:13:02 crc kubenswrapper[4593]: E0129 11:13:02.531686 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" podUID="62efedcb-a194-4692-8e84-a0da7777a400" Jan 29 11:13:02 crc kubenswrapper[4593]: E0129 11:13:02.539204 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" 
podUID="cdb96936-cd34-44fd-94b5-5570688fdfe6" Jan 29 11:13:03 crc kubenswrapper[4593]: I0129 11:13:03.946547 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:13:03 crc kubenswrapper[4593]: I0129 11:13:03.946665 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:13:03 crc kubenswrapper[4593]: I0129 11:13:03.946733 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:13:03 crc kubenswrapper[4593]: I0129 11:13:03.947509 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:13:03 crc kubenswrapper[4593]: I0129 11:13:03.947704 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2" gracePeriod=600 Jan 29 11:13:05 crc kubenswrapper[4593]: I0129 11:13:05.554251 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2" exitCode=0 Jan 29 11:13:05 crc kubenswrapper[4593]: I0129 11:13:05.554324 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2"} Jan 29 11:13:05 crc kubenswrapper[4593]: I0129 11:13:05.554624 4593 scope.go:117] "RemoveContainer" containerID="ad7eaa6d8b75487d2b1860d56574f3e98a7f997d74c38ceba49998dcdb20364d" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.276902 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.277091 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89dhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-9dbds_openstack-operators(ba6fb45a-59ff-42ee-acb0-0ee43d001e1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.278294 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podUID="ba6fb45a-59ff-42ee-acb0-0ee43d001e1e" Jan 29 11:13:06 crc kubenswrapper[4593]: I0129 11:13:06.500843 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:13:06 crc kubenswrapper[4593]: I0129 11:13:06.522248 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c2cda883-37e6-4c21-b320-4962ffdc98b3-cert\") pod \"infra-operator-controller-manager-79955696d6-6zkvt\" (UID: \"c2cda883-37e6-4c21-b320-4962ffdc98b3\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:13:06 crc kubenswrapper[4593]: I0129 11:13:06.739212 4593 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-q26cz" Jan 29 11:13:06 crc kubenswrapper[4593]: I0129 11:13:06.747700 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.871468 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.871840 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s4gqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-zmssx_openstack-operators(0259a320-8da9-48e5-8d73-25b09774d9c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:13:06 crc kubenswrapper[4593]: E0129 11:13:06.874210 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" podUID="0259a320-8da9-48e5-8d73-25b09774d9c1" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.007367 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.011412 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6e2fc57-0cce-4f5a-bf3e-63efbfff1073-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb\" (UID: \"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.222357 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-28sbr" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.231028 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.413422 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.413804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.420416 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-webhook-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.420694 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/960bb326-dc22-43e5-bc4f-05c9ce9e26a9-metrics-certs\") pod \"openstack-operator-controller-manager-6d898fd894-sh94p\" (UID: \"960bb326-dc22-43e5-bc4f-05c9ce9e26a9\") " pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.544618 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lj4r8" Jan 29 11:13:07 crc kubenswrapper[4593]: I0129 11:13:07.553175 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:09 crc kubenswrapper[4593]: E0129 11:13:09.230008 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 29 11:13:09 crc kubenswrapper[4593]: E0129 11:13:09.230897 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-54qbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-tfkk2_openstack-operators(2f32633b-0490-4885-9543-a140c807c742): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:13:09 crc kubenswrapper[4593]: E0129 11:13:09.232110 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podUID="2f32633b-0490-4885-9543-a140c807c742" Jan 29 11:13:09 crc kubenswrapper[4593]: I0129 11:13:09.670896 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt"] Jan 29 11:13:09 crc kubenswrapper[4593]: I0129 11:13:09.954602 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb"] Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.122818 4593 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p"] Jan 29 11:13:10 crc kubenswrapper[4593]: W0129 11:13:10.152726 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod960bb326_dc22_43e5_bc4f_05c9ce9e26a9.slice/crio-a8ecc64fe66ad37e78eb694646bbd9238ebfb6f71be8ee350adf900b53337dce WatchSource:0}: Error finding container a8ecc64fe66ad37e78eb694646bbd9238ebfb6f71be8ee350adf900b53337dce: Status 404 returned error can't find the container with id a8ecc64fe66ad37e78eb694646bbd9238ebfb6f71be8ee350adf900b53337dce Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.677978 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" event={"ID":"0881deda-c42a-48d8-9059-b7eaf66c0f9f","Type":"ContainerStarted","Data":"e395f982bfa07a71d1aa775488c937505a4ada3659c8a3636bb859871634c770"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.679137 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.692050 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" event={"ID":"50471b23-1d0d-4bd9-a66f-a89b3a39a612","Type":"ContainerStarted","Data":"b231b187705c9af3e3ae611acabe98946d39cdff466dec66822fc7e563b85228"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.692317 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.701770 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" event={"ID":"50a8381e-e59b-4400-9209-c33ef4f99c23","Type":"ContainerStarted","Data":"ce69171986ca0b12a3f4ac966fd11a910974d71a94f7229909ad2a3889479412"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.702575 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.710398 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" event={"ID":"40ab1792-0354-4c78-ac44-a217fbc610a9","Type":"ContainerStarted","Data":"4fd87f5b6d25adeb291e3d201cbaf541da2bd334f0ef25741c61cc6cdde84fe6"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.710884 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.713679 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" podStartSLOduration=3.757414551 podStartE2EDuration="36.713659336s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.417411639 +0000 UTC m=+822.290445830" lastFinishedPulling="2026-01-29 11:13:09.373656424 +0000 UTC m=+855.246690615" observedRunningTime="2026-01-29 11:13:10.710867571 +0000 UTC m=+856.583901762" watchObservedRunningTime="2026-01-29 11:13:10.713659336 +0000 UTC m=+856.586693547" Jan 29 11:13:10 crc 
kubenswrapper[4593]: I0129 11:13:10.717503 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" event={"ID":"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073","Type":"ContainerStarted","Data":"bc4a3768fa1c9cca4812d193310cac28fcbf1805af95c04e1a9386ba634aae79"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.731151 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" event={"ID":"ea8d9bb8-bdec-453d-a308-28b962971254","Type":"ContainerStarted","Data":"4a8735b1c5a5e878884c825469cc70b09b364da8e3d7918b0de752bfddf419a3"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.731867 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.733961 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" event={"ID":"499923d8-4593-4225-bc4c-6166003a0bb3","Type":"ContainerStarted","Data":"62e93778726a1f41355dbbf7285244bf9bb1f28814e7a5be4edd90d02a79250e"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.734407 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.735730 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" event={"ID":"960bb326-dc22-43e5-bc4f-05c9ce9e26a9","Type":"ContainerStarted","Data":"d161ff8604ed6842d1b926313fb9ce28b0699c4b7ecd9d89b39cb0417ed598de"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.735757 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" event={"ID":"960bb326-dc22-43e5-bc4f-05c9ce9e26a9","Type":"ContainerStarted","Data":"a8ecc64fe66ad37e78eb694646bbd9238ebfb6f71be8ee350adf900b53337dce"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.736238 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.739028 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" event={"ID":"c5e6d3a8-d6d9-4445-9708-28b88928333e","Type":"ContainerStarted","Data":"e84cbf3484cac3ce8eddf8160f2011836e78be1faec794bd083be1721d2abcb6"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.739565 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.745502 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.746654 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" podStartSLOduration=4.339052038 podStartE2EDuration="36.746640671s" 
podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.907392395 +0000 UTC m=+822.780426586" lastFinishedPulling="2026-01-29 11:13:09.314981028 +0000 UTC m=+855.188015219" observedRunningTime="2026-01-29 11:13:10.743913128 +0000 UTC m=+856.616947319" watchObservedRunningTime="2026-01-29 11:13:10.746640671 +0000 UTC m=+856.619674862" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.747464 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" event={"ID":"734187ee-1e17-4cdc-b3bb-cfbd6e424793","Type":"ContainerStarted","Data":"6497dccb3a34f47dd9bbd0fb8434cef415eec621b65f013293f1df2be85fb4c8"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.747836 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.748729 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" event={"ID":"e35e9127-0337-4860-b938-bb477a408f1e","Type":"ContainerStarted","Data":"0626d55e873d56eda1b1771a724c1d55292071d479a657ee58d4b21362b1033f"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.749048 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.769910 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" event={"ID":"c2cda883-37e6-4c21-b320-4962ffdc98b3","Type":"ContainerStarted","Data":"be00a3caffe19975b470e0e50b2a718bbd85fb7eba28c115a53731c77e7cbe98"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.792191 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" event={"ID":"336c4e93-7d0b-4570-aafc-22e0f812db12","Type":"ContainerStarted","Data":"40d45fbb9de216994de45466c292c4b042e477acb167d5cd19427c458a4db60d"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.793016 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.803268 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" podStartSLOduration=3.384569566 podStartE2EDuration="36.803250572s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.397716279 +0000 UTC m=+822.270750470" lastFinishedPulling="2026-01-29 11:13:09.816397285 +0000 UTC m=+855.689431476" observedRunningTime="2026-01-29 11:13:10.793187212 +0000 UTC m=+856.666221403" watchObservedRunningTime="2026-01-29 11:13:10.803250572 +0000 UTC m=+856.676284763" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.810432 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" event={"ID":"b45fb247-850e-40b4-b62e-8551d55efcba","Type":"ContainerStarted","Data":"e50ab71589fb968b76137a627ecacb4e8d703634656004c9b0b230eac132891c"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.811253 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.840421 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" podStartSLOduration=10.078805015 podStartE2EDuration="36.840384919s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.456499063 +0000 UTC m=+822.329533254" lastFinishedPulling="2026-01-29 11:13:03.218078967 +0000 UTC m=+849.091113158" observedRunningTime="2026-01-29 11:13:10.834267875 +0000 UTC m=+856.707302066" watchObservedRunningTime="2026-01-29 11:13:10.840384919 +0000 UTC m=+856.713419110" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.841794 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" event={"ID":"9b88fe2c-a82a-4284-961a-8af3818815ec","Type":"ContainerStarted","Data":"3789ccba04697340b75376fc150b0baf7a2392f0058aa4ae83348b4fb42b45cf"} Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.842184 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.955832 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" podStartSLOduration=4.358213881 podStartE2EDuration="36.955792929s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.877413767 +0000 UTC m=+822.750447958" lastFinishedPulling="2026-01-29 11:13:09.474992805 +0000 UTC m=+855.348027006" observedRunningTime="2026-01-29 11:13:10.938059183 +0000 UTC m=+856.811093394" watchObservedRunningTime="2026-01-29 11:13:10.955792929 +0000 UTC m=+856.828827110" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.958263 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" podStartSLOduration=11.399390052 podStartE2EDuration="36.958241694s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:35.973768535 +0000 UTC m=+821.846802726" lastFinishedPulling="2026-01-29 11:13:01.532620187 +0000 UTC m=+847.405654368" observedRunningTime="2026-01-29 11:13:10.874905976 +0000 UTC m=+856.747940167" watchObservedRunningTime="2026-01-29 11:13:10.958241694 +0000 UTC m=+856.831275885" Jan 29 11:13:10 crc kubenswrapper[4593]: I0129 11:13:10.988221 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" podStartSLOduration=14.59799531 podStartE2EDuration="36.988191779s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:35.895864104 +0000 UTC m=+821.768898295" lastFinishedPulling="2026-01-29 11:12:58.286060573 +0000 UTC m=+844.159094764" observedRunningTime="2026-01-29 11:13:10.983856503 +0000 UTC m=+856.856890704" watchObservedRunningTime="2026-01-29 11:13:10.988191779 +0000 UTC m=+856.861225970" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.022104 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" podStartSLOduration=4.593918393 podStartE2EDuration="37.02208864s" 
podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.886728639 +0000 UTC m=+822.759762830" lastFinishedPulling="2026-01-29 11:13:09.314898866 +0000 UTC m=+855.187933077" observedRunningTime="2026-01-29 11:13:11.019074679 +0000 UTC m=+856.892108890" watchObservedRunningTime="2026-01-29 11:13:11.02208864 +0000 UTC m=+856.895122831" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.154445 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" podStartSLOduration=36.154424284 podStartE2EDuration="36.154424284s" podCreationTimestamp="2026-01-29 11:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:13:11.149997905 +0000 UTC m=+857.023032096" watchObservedRunningTime="2026-01-29 11:13:11.154424284 +0000 UTC m=+857.027458475" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.212294 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" podStartSLOduration=4.319242477 podStartE2EDuration="37.212274478s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.421864945 +0000 UTC m=+822.294899136" lastFinishedPulling="2026-01-29 11:13:09.314896936 +0000 UTC m=+855.187931137" observedRunningTime="2026-01-29 11:13:11.194324536 +0000 UTC m=+857.067358737" watchObservedRunningTime="2026-01-29 11:13:11.212274478 +0000 UTC m=+857.085308659" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.221523 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" podStartSLOduration=8.418551775 podStartE2EDuration="37.221493295s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.775804753 +0000 UTC m=+822.648838944" lastFinishedPulling="2026-01-29 11:13:05.578746273 +0000 UTC m=+851.451780464" observedRunningTime="2026-01-29 11:13:11.219752768 +0000 UTC m=+857.092786959" watchObservedRunningTime="2026-01-29 11:13:11.221493295 +0000 UTC m=+857.094527506" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.263256 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" podStartSLOduration=14.828606506 podStartE2EDuration="37.263237446s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:35.852223733 +0000 UTC m=+821.725257924" lastFinishedPulling="2026-01-29 11:12:58.286854673 +0000 UTC m=+844.159888864" observedRunningTime="2026-01-29 11:13:11.261315185 +0000 UTC m=+857.134349376" watchObservedRunningTime="2026-01-29 11:13:11.263237446 +0000 UTC m=+857.136271637" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.856617 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" event={"ID":"812ebcfb-766d-4a1b-aaaa-2dd5a96ce047","Type":"ContainerStarted","Data":"66670e17430983198f3bd51333458e98de5166755abe9118d08fca861d9f73b7"} Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.937613 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" podStartSLOduration=5.393157804 
podStartE2EDuration="37.9375896s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.862266415 +0000 UTC m=+822.735300606" lastFinishedPulling="2026-01-29 11:13:09.406698191 +0000 UTC m=+855.279732402" observedRunningTime="2026-01-29 11:13:11.356299457 +0000 UTC m=+857.229333668" watchObservedRunningTime="2026-01-29 11:13:11.9375896 +0000 UTC m=+857.810623791" Jan 29 11:13:11 crc kubenswrapper[4593]: I0129 11:13:11.941557 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" podStartSLOduration=3.79699271 podStartE2EDuration="37.941543056s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.411301511 +0000 UTC m=+822.284335702" lastFinishedPulling="2026-01-29 11:13:10.555851857 +0000 UTC m=+856.428886048" observedRunningTime="2026-01-29 11:13:11.936959082 +0000 UTC m=+857.809993273" watchObservedRunningTime="2026-01-29 11:13:11.941543056 +0000 UTC m=+857.814577247" Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.890173 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" event={"ID":"2c7ec826-43f0-49f3-9d96-4330427e4ed9","Type":"ContainerStarted","Data":"72c71f46f45bc9200a61f2fe96a5e57792c486fd79edd6edbac1fac91ec38878"} Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.890960 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.899606 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" event={"ID":"0e86fa54-1e41-4bb9-86c7-a0ea0d919270","Type":"ContainerStarted","Data":"a7d6ee1831a5c14518a71cc9f80893decec79f51ac3109b12c6a77aa6c923b6e"} Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.899815 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.916135 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" podStartSLOduration=4.237500918 podStartE2EDuration="39.916117162s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.879840651 +0000 UTC m=+822.752874842" lastFinishedPulling="2026-01-29 11:13:12.558456895 +0000 UTC m=+858.431491086" observedRunningTime="2026-01-29 11:13:13.914714114 +0000 UTC m=+859.787748305" watchObservedRunningTime="2026-01-29 11:13:13.916117162 +0000 UTC m=+859.789151353" Jan 29 11:13:13 crc kubenswrapper[4593]: I0129 11:13:13.930889 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" podStartSLOduration=3.163524962 podStartE2EDuration="39.930876078s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.780519325 +0000 UTC m=+822.653553516" lastFinishedPulling="2026-01-29 11:13:13.547870441 +0000 UTC m=+859.420904632" observedRunningTime="2026-01-29 11:13:13.928434873 +0000 UTC m=+859.801469064" watchObservedRunningTime="2026-01-29 11:13:13.930876078 +0000 UTC m=+859.803910269" Jan 29 11:13:14 crc kubenswrapper[4593]: I0129 11:13:14.622535 
4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-7hmqc" Jan 29 11:13:14 crc kubenswrapper[4593]: I0129 11:13:14.640999 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-7ns7q" Jan 29 11:13:14 crc kubenswrapper[4593]: I0129 11:13:14.697973 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-xw2pz" Jan 29 11:13:14 crc kubenswrapper[4593]: I0129 11:13:14.891025 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-98l2v" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.063650 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.067687 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-t584q" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.187287 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-c89cq" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.260287 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-qt87l" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.456282 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-885pn" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.638973 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-8kf6p" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.701088 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ltfr4" Jan 29 11:13:15 crc kubenswrapper[4593]: I0129 11:13:15.743240 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-z4mp8" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.929173 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" event={"ID":"cdb96936-cd34-44fd-94b5-5570688fdfe6","Type":"ContainerStarted","Data":"19de6d55484fcb2fd18981d647ca6de6a0f6695bc25dac585e66cef31e3a2d98"} Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.929582 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.931301 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" event={"ID":"f6e2fc57-0cce-4f5a-bf3e-63efbfff1073","Type":"ContainerStarted","Data":"bf62fe720cc32b4683be192add558948198dd806971fc03e3e3a34ed038e5ee7"} Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.931442 4593 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.932497 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" event={"ID":"c2cda883-37e6-4c21-b320-4962ffdc98b3","Type":"ContainerStarted","Data":"2103598935a9d72d9150d67bbadf9ad2c574b7c2f0779f0d44481950669ede18"} Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.932605 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.944406 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" podStartSLOduration=2.834155317 podStartE2EDuration="42.94438826s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.467905229 +0000 UTC m=+822.340939420" lastFinishedPulling="2026-01-29 11:13:16.578138172 +0000 UTC m=+862.451172363" observedRunningTime="2026-01-29 11:13:16.942764246 +0000 UTC m=+862.815798447" watchObservedRunningTime="2026-01-29 11:13:16.94438826 +0000 UTC m=+862.817422451" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.975226 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" podStartSLOduration=36.381714779 podStartE2EDuration="42.975201787s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:13:09.978122029 +0000 UTC m=+855.851156220" lastFinishedPulling="2026-01-29 11:13:16.571609027 +0000 UTC m=+862.444643228" observedRunningTime="2026-01-29 11:13:16.970900492 +0000 UTC m=+862.843934693" watchObservedRunningTime="2026-01-29 11:13:16.975201787 +0000 UTC m=+862.848235978" Jan 29 11:13:16 crc kubenswrapper[4593]: I0129 11:13:16.997206 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" podStartSLOduration=36.159620785 podStartE2EDuration="42.997182958s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:13:09.739465539 +0000 UTC m=+855.612499730" lastFinishedPulling="2026-01-29 11:13:16.577027712 +0000 UTC m=+862.450061903" observedRunningTime="2026-01-29 11:13:16.99132105 +0000 UTC m=+862.864355251" watchObservedRunningTime="2026-01-29 11:13:16.997182958 +0000 UTC m=+862.870217149" Jan 29 11:13:17 crc kubenswrapper[4593]: I0129 11:13:17.559114 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" Jan 29 11:13:18 crc kubenswrapper[4593]: I0129 11:13:18.945432 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" event={"ID":"62efedcb-a194-4692-8e84-a0da7777a400","Type":"ContainerStarted","Data":"6a7a3b4edc11f928639449a1f7d706a8d8c95e7f9b476367bd5168246fc8526e"} Jan 29 11:13:18 crc kubenswrapper[4593]: I0129 11:13:18.946669 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:13:18 crc kubenswrapper[4593]: I0129 11:13:18.967526 4593 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" podStartSLOduration=3.305937832 podStartE2EDuration="44.967503839s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.861117895 +0000 UTC m=+822.734152086" lastFinishedPulling="2026-01-29 11:13:18.522683882 +0000 UTC m=+864.395718093" observedRunningTime="2026-01-29 11:13:18.963249195 +0000 UTC m=+864.836283386" watchObservedRunningTime="2026-01-29 11:13:18.967503839 +0000 UTC m=+864.840538020" Jan 29 11:13:19 crc kubenswrapper[4593]: E0129 11:13:19.078085 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" podUID="0259a320-8da9-48e5-8d73-25b09774d9c1" Jan 29 11:13:20 crc kubenswrapper[4593]: E0129 11:13:20.077450 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podUID="2f32633b-0490-4885-9543-a140c807c742" Jan 29 11:13:20 crc kubenswrapper[4593]: E0129 11:13:20.077585 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podUID="ba6fb45a-59ff-42ee-acb0-0ee43d001e1e" Jan 29 11:13:24 crc kubenswrapper[4593]: I0129 11:13:24.716937 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2ml7m" Jan 29 11:13:24 crc kubenswrapper[4593]: I0129 11:13:24.757565 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-xqcrc" Jan 29 11:13:25 crc kubenswrapper[4593]: I0129 11:13:25.148103 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-xf5fn" Jan 29 11:13:25 crc kubenswrapper[4593]: I0129 11:13:25.259305 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-zx6r8" Jan 29 11:13:25 crc kubenswrapper[4593]: I0129 11:13:25.481571 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-kttv8" Jan 29 11:13:25 crc kubenswrapper[4593]: I0129 11:13:25.825185 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-k4b7q" Jan 29 11:13:26 crc kubenswrapper[4593]: I0129 11:13:26.754536 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-6zkvt" Jan 29 11:13:27 crc kubenswrapper[4593]: I0129 11:13:27.239371 4593 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb" Jan 29 11:13:32 crc kubenswrapper[4593]: I0129 11:13:32.076298 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:13:33 crc kubenswrapper[4593]: I0129 11:13:33.041485 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" event={"ID":"0259a320-8da9-48e5-8d73-25b09774d9c1","Type":"ContainerStarted","Data":"40e1fde520d3392e4c75be969974c783b32b945e8bc13323204eaa9722384e5e"} Jan 29 11:13:33 crc kubenswrapper[4593]: I0129 11:13:33.042022 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:13:33 crc kubenswrapper[4593]: I0129 11:13:33.066369 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" podStartSLOduration=2.429511901 podStartE2EDuration="58.066349527s" podCreationTimestamp="2026-01-29 11:12:35 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.90487249 +0000 UTC m=+822.777906681" lastFinishedPulling="2026-01-29 11:13:32.541710116 +0000 UTC m=+878.414744307" observedRunningTime="2026-01-29 11:13:33.057809638 +0000 UTC m=+878.930843829" watchObservedRunningTime="2026-01-29 11:13:33.066349527 +0000 UTC m=+878.939383718" Jan 29 11:13:36 crc kubenswrapper[4593]: I0129 11:13:36.061423 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" event={"ID":"2f32633b-0490-4885-9543-a140c807c742","Type":"ContainerStarted","Data":"cb9a81743cd483803fa0d10904e0bfe6026c9c670e8a251a6150438a487d91de"} Jan 29 11:13:36 crc kubenswrapper[4593]: I0129 11:13:36.063534 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" event={"ID":"ba6fb45a-59ff-42ee-acb0-0ee43d001e1e","Type":"ContainerStarted","Data":"274529be6a5c28dc3c29f2a5e2ea7263a379e80db25fab52d7a0f10d147c8dd4"} Jan 29 11:13:36 crc kubenswrapper[4593]: I0129 11:13:36.064086 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:13:36 crc kubenswrapper[4593]: I0129 11:13:36.112160 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tfkk2" podStartSLOduration=2.380402156 podStartE2EDuration="1m1.112143997s" podCreationTimestamp="2026-01-29 11:12:35 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.912244881 +0000 UTC m=+822.785279072" lastFinishedPulling="2026-01-29 11:13:35.643986722 +0000 UTC m=+881.517020913" observedRunningTime="2026-01-29 11:13:36.086346394 +0000 UTC m=+881.959380585" watchObservedRunningTime="2026-01-29 11:13:36.112143997 +0000 UTC m=+881.985178178" Jan 29 11:13:36 crc kubenswrapper[4593]: I0129 11:13:36.114984 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" podStartSLOduration=3.377289888 podStartE2EDuration="1m2.114976324s" podCreationTimestamp="2026-01-29 11:12:34 +0000 UTC" firstStartedPulling="2026-01-29 11:12:36.907361215 +0000 UTC m=+822.780395406" lastFinishedPulling="2026-01-29 
11:13:35.645047651 +0000 UTC m=+881.518081842" observedRunningTime="2026-01-29 11:13:36.108761806 +0000 UTC m=+881.981796017" watchObservedRunningTime="2026-01-29 11:13:36.114976324 +0000 UTC m=+881.988010515" Jan 29 11:13:45 crc kubenswrapper[4593]: I0129 11:13:45.631468 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-9dbds" Jan 29 11:13:46 crc kubenswrapper[4593]: I0129 11:13:46.022856 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-zmssx" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.330823 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.332993 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.337613 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fhqs4" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.337895 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.337969 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.345147 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.354875 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.365351 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69lr\" (UniqueName: \"kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.365410 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.446742 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.452690 4593 util.go:30] "No sandbox for pod can be found. 
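The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted, which is why pods that spent close to a minute in ImagePullBackOff still report SLO durations of a few seconds. A minimal Go sketch (not kubelet code; the timestamps are copied from the octavia-operator entry above) reproducing the arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the "Observed pod startup duration" entry for
        // octavia-operator-controller-manager-6687f8d877-9dbds above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        mustParse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := mustParse("2026-01-29 11:12:34 +0000 UTC")
        firstPull := mustParse("2026-01-29 11:12:36.907361215 +0000 UTC")
        lastPull := mustParse("2026-01-29 11:13:35.645047651 +0000 UTC")
        observed := mustParse("2026-01-29 11:13:36.114976324 +0000 UTC")

        e2e := observed.Sub(created)         // 1m2.114976324s == podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 3.377289888s  == podStartSLOduration
        fmt.Println(e2e, slo)
    }

The same relation holds for the mariadb, watcher, and rabbitmq-cluster entries in this section.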
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.456129 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.468586 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.469204 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w69lr\" (UniqueName: \"kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.469255 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.470095 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.517363 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w69lr\" (UniqueName: \"kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr\") pod \"dnsmasq-dns-675f4bcbfc-t52gk\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.570477 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.570575 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc96n\" (UniqueName: \"kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.570845 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.665095 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.672181 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.672409 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.672969 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc96n\" (UniqueName: \"kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.673248 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.673348 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.705472 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc96n\" (UniqueName: \"kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n\") pod \"dnsmasq-dns-78dd6ddcc-swvvt\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:01 crc kubenswrapper[4593]: I0129 11:14:01.773258 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:02 crc kubenswrapper[4593]: I0129 11:14:02.038496 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:02 crc kubenswrapper[4593]: I0129 11:14:02.158263 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:02 crc kubenswrapper[4593]: W0129 11:14:02.164216 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb705d0db_8509_4a63_9f5a_87976d741ebc.slice/crio-c6b4f9ad5f9e175b3ecf71d1aa97e66d43ecb6c79e5698c17d617486827b1855 WatchSource:0}: Error finding container c6b4f9ad5f9e175b3ecf71d1aa97e66d43ecb6c79e5698c17d617486827b1855: Status 404 returned error can't find the container with id c6b4f9ad5f9e175b3ecf71d1aa97e66d43ecb6c79e5698c17d617486827b1855 Jan 29 11:14:02 crc kubenswrapper[4593]: I0129 11:14:02.445185 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" event={"ID":"3616718a-e7ca-4045-941b-4109f08f4989","Type":"ContainerStarted","Data":"57892c814f48ce6859a27a763582b6a66ed12dadc0f9828ee1126b0622d692ee"} Jan 29 11:14:02 crc kubenswrapper[4593]: I0129 11:14:02.446567 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" event={"ID":"b705d0db-8509-4a63-9f5a-87976d741ebc","Type":"ContainerStarted","Data":"c6b4f9ad5f9e175b3ecf71d1aa97e66d43ecb6c79e5698c17d617486827b1855"} Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.253679 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.294255 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.298165 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.301434 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.424128 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.424176 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.424202 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pqcc\" (UniqueName: \"kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.525943 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.525997 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.526022 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pqcc\" (UniqueName: \"kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.526936 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.526944 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.553845 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pqcc\" (UniqueName: 
\"kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc\") pod \"dnsmasq-dns-666b6646f7-bvbjq\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") " pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.631246 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.660319 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.694019 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.695119 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.752393 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"] Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.839415 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr9cr\" (UniqueName: \"kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.839800 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.839824 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.940649 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.940696 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.940744 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr9cr\" (UniqueName: \"kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.941650 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.941871 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:04 crc kubenswrapper[4593]: I0129 11:14:04.970336 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr9cr\" (UniqueName: \"kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr\") pod \"dnsmasq-dns-57d769cc4f-4mvwn\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") " pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.080921 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.314612 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:14:05 crc kubenswrapper[4593]: W0129 11:14:05.339558 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e7df070_9e8b_4e24_ac24_4593ef89aca9.slice/crio-565dedef28a6391201b894212d9023a697aa75bba8630f014fc28b15721c946e WatchSource:0}: Error finding container 565dedef28a6391201b894212d9023a697aa75bba8630f014fc28b15721c946e: Status 404 returned error can't find the container with id 565dedef28a6391201b894212d9023a697aa75bba8630f014fc28b15721c946e Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.467918 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.470781 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477054 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477306 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477478 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477568 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ck876" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477670 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.477700 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.481837 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.496464 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.510409 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" event={"ID":"7e7df070-9e8b-4e24-ac24-4593ef89aca9","Type":"ContainerStarted","Data":"565dedef28a6391201b894212d9023a697aa75bba8630f014fc28b15721c946e"} Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551574 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551619 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551680 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gt4f\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551697 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551716 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551738 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551754 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551786 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551814 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551843 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.551870 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654440 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654507 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654543 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654576 4593 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654607 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654626 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gt4f\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654676 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654701 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654719 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654762 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654800 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654949 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.654995 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.655825 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.655897 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.656946 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.657174 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.662248 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.668175 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.668984 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.669846 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.678462 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gt4f\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f\") pod \"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.683971 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod 
\"rabbitmq-server-0\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.814767 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.819044 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"] Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.910785 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.912000 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.916749 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.916964 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.916982 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.917132 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ztnqn" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.917204 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.917248 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.917380 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.927411 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.959959 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960240 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960323 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960420 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960501 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960569 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960710 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960815 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.960936 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pmxq\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.961043 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:05 crc kubenswrapper[4593]: I0129 11:14:05.961131 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.065584 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066017 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pmxq\" (UniqueName: 
\"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066074 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066103 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066271 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066321 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066343 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066400 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066430 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066453 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.066488 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.067698 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.067981 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.068589 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.069321 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.070345 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.075312 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.092168 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.096600 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.107908 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.135226 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.136458 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pmxq\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.139056 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.258556 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.553565 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.568310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" event={"ID":"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b","Type":"ContainerStarted","Data":"007a02e651669e8d70d7d24081e75b51bae9e37c2bf6d5643b4ba609d3b0011b"} Jan 29 11:14:06 crc kubenswrapper[4593]: W0129 11:14:06.639425 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f6d0a4_2543_4de8_a64e_f3ce4c2bb11e.slice/crio-5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090 WatchSource:0}: Error finding container 5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090: Status 404 returned error can't find the container with id 5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090 Jan 29 11:14:06 crc kubenswrapper[4593]: I0129 11:14:06.920107 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.118650 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.154060 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.154219 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.159188 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-qjhkm" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.159498 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.160262 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.168893 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.181841 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310050 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310096 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-kolla-config\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310157 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310203 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310225 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6674f537-f800-4b05-912c-b2671e504c17-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310247 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310268 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.310301 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjf25\" (UniqueName: \"kubernetes.io/projected/6674f537-f800-4b05-912c-b2671e504c17-kube-api-access-jjf25\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.411845 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-config-data-default\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.411908 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjf25\" (UniqueName: \"kubernetes.io/projected/6674f537-f800-4b05-912c-b2671e504c17-kube-api-access-jjf25\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.411953 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.411972 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-kolla-config\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.412008 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.412039 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.412060 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6674f537-f800-4b05-912c-b2671e504c17-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.412080 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.413823 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.414396 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-config-data-default\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.414643 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6674f537-f800-4b05-912c-b2671e504c17-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.415174 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.415277 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6674f537-f800-4b05-912c-b2671e504c17-kolla-config\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.442876 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.448336 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6674f537-f800-4b05-912c-b2671e504c17-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.474242 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjf25\" (UniqueName: \"kubernetes.io/projected/6674f537-f800-4b05-912c-b2671e504c17-kube-api-access-jjf25\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.488969 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"6674f537-f800-4b05-912c-b2671e504c17\") " pod="openstack/openstack-galera-0" Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.521232 4593 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.589343 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerStarted","Data":"5a494b5365040c8bc0ddefc581e932c4375131be0145147547aba83d5a596b24"} Jan 29 11:14:07 crc kubenswrapper[4593]: I0129 11:14:07.593972 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerStarted","Data":"5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090"} Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.274395 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.276410 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.281728 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.281973 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-fdlz9" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.282226 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.282337 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.310676 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441504 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1755998-9149-49be-b10f-c4fe029728bc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441544 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ndsc\" (UniqueName: \"kubernetes.io/projected/c1755998-9149-49be-b10f-c4fe029728bc-kube-api-access-7ndsc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441580 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441601 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: 
I0129 11:14:08.441643 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441669 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441700 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.441736 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545392 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545483 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1755998-9149-49be-b10f-c4fe029728bc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545508 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ndsc\" (UniqueName: \"kubernetes.io/projected/c1755998-9149-49be-b10f-c4fe029728bc-kube-api-access-7ndsc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545541 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545567 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545598 
4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545646 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.545679 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.547192 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.547840 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.548130 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1755998-9149-49be-b10f-c4fe029728bc-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.548312 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.555616 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.556609 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1755998-9149-49be-b10f-c4fe029728bc-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.579558 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ndsc\" (UniqueName: 
\"kubernetes.io/projected/c1755998-9149-49be-b10f-c4fe029728bc-kube-api-access-7ndsc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.614604 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.615569 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.618882 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1755998-9149-49be-b10f-c4fe029728bc-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.619460 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.619549 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-m6vm2" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.621204 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c1755998-9149-49be-b10f-c4fe029728bc\") " pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.623942 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.629020 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.655754 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.751899 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kolla-config\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.751964 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-config-data\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.752022 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-memcached-tls-certs\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.752061 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8wt\" (UniqueName: \"kubernetes.io/projected/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kube-api-access-dr8wt\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " 
pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.752081 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.852963 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-memcached-tls-certs\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.853043 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr8wt\" (UniqueName: \"kubernetes.io/projected/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kube-api-access-dr8wt\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.853076 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.853107 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kolla-config\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.853158 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-config-data\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.854141 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-config-data\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.865718 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kolla-config\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.871588 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-combined-ca-bundle\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.872036 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.882315 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr8wt\" (UniqueName: \"kubernetes.io/projected/dc6f5a6c-3bf0-4f78-89f3-1e2683a37958-kube-api-access-dr8wt\") pod \"memcached-0\" (UID: \"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958\") " pod="openstack/memcached-0" Jan 29 11:14:08 crc kubenswrapper[4593]: I0129 11:14:08.912502 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.002664 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.686981 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6674f537-f800-4b05-912c-b2671e504c17","Type":"ContainerStarted","Data":"ce5363c18f79bb9c1f08e89717105847da3abd6525a9cd16fe23e08aae5ac420"} Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.732348 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.774363 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.941885 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.943709 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.969301 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.993478 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.993567 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njvk6\" (UniqueName: \"kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:09 crc kubenswrapper[4593]: I0129 11:14:09.993617 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.095286 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " 
pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.095343 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njvk6\" (UniqueName: \"kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.095381 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.095926 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.096198 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.141730 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njvk6\" (UniqueName: \"kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6\") pod \"community-operators-5zjts\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.310681 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.542360 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.543221 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.578691 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h5q6w" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.580774 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.607822 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsks2\" (UniqueName: \"kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2\") pod \"kube-state-metrics-0\" (UID: \"1512a75d-a403-420b-a9be-f931faf1273a\") " pod="openstack/kube-state-metrics-0" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.708764 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsks2\" (UniqueName: \"kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2\") pod \"kube-state-metrics-0\" (UID: \"1512a75d-a403-420b-a9be-f931faf1273a\") " pod="openstack/kube-state-metrics-0" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.719656 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1755998-9149-49be-b10f-c4fe029728bc","Type":"ContainerStarted","Data":"1170cf8324ef1a48f8a2b560460beca35748d70260701349c0c3a1810b1b114d"} Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.732626 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958","Type":"ContainerStarted","Data":"a9d15fd64111c3152bb3aed188baeb95bb13f70e61a520ab6fb744a75ae37941"} Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.768563 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsks2\" (UniqueName: \"kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2\") pod \"kube-state-metrics-0\" (UID: \"1512a75d-a403-420b-a9be-f931faf1273a\") " pod="openstack/kube-state-metrics-0" Jan 29 11:14:10 crc kubenswrapper[4593]: I0129 11:14:10.905832 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:14:11 crc kubenswrapper[4593]: I0129 11:14:11.320678 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:14:11 crc kubenswrapper[4593]: I0129 11:14:11.749803 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerStarted","Data":"ade31aca7ba29e2371128a860beb89fe80c8c2fbd7528ceac5d2035097f7e6ad"} Jan 29 11:14:11 crc kubenswrapper[4593]: I0129 11:14:11.837871 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:14:11 crc kubenswrapper[4593]: W0129 11:14:11.923764 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1512a75d_a403_420b_a9be_f931faf1273a.slice/crio-a9c985edeb4a844ebb330990ed11e56a44761422347a56b0c3bd545f3f8f0fc2 WatchSource:0}: Error finding container a9c985edeb4a844ebb330990ed11e56a44761422347a56b0c3bd545f3f8f0fc2: Status 404 returned error can't find the container with id a9c985edeb4a844ebb330990ed11e56a44761422347a56b0c3bd545f3f8f0fc2 Jan 29 11:14:12 crc kubenswrapper[4593]: I0129 11:14:12.771840 4593 generic.go:334] "Generic (PLEG): container finished" podID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerID="88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351" exitCode=0 Jan 29 11:14:12 crc kubenswrapper[4593]: I0129 11:14:12.772224 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerDied","Data":"88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351"} Jan 29 11:14:12 crc kubenswrapper[4593]: I0129 11:14:12.783784 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1512a75d-a403-420b-a9be-f931faf1273a","Type":"ContainerStarted","Data":"a9c985edeb4a844ebb330990ed11e56a44761422347a56b0c3bd545f3f8f0fc2"} Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.132752 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.141270 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
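
Several kubelet mechanisms interleave in the entries above: the sync loop ingests pod ADD/UPDATE events from the API server, util.go:30 notes that no sandbox exists yet so one must be created, and the PLEG (Pod Lifecycle Event Generator) relist surfaces runtime state changes as ContainerStarted/ContainerDied events (the ContainerDied with exitCode=0 is a container completing normally). The W-level manager.go:1169 warning is cAdvisor racing the runtime: the new crio-... cgroup appears before the container is queryable, so the one-off 404 is usually transient and harmless. Below is a rough sketch, not part of the log, for measuring each pod's delay from "SyncLoop ADD" to its first ContainerStarted event; it assumes one klog entry per physical line (as in an unwrapped kubelet.log) and single-pod ADD lists like those seen here.

// pleg_latency.go — hypothetical helper: delay from "SyncLoop ADD" to the
// first PLEG ContainerStarted event per pod, from a kubelet log on stdin.
// klog timestamps (e.g. I0129 11:14:08.274395) omit the year, so only
// same-day deltas are meaningful.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	tsRe    = regexp.MustCompile(`[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d{6})`)
	addRe   = regexp.MustCompile(`"SyncLoop ADD" source="api" pods=\["([^"]+)"\]`)
	startRe = regexp.MustCompile(`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{[^}]*"Type":"ContainerStarted"`)
)

func main() {
	added := map[string]time.Time{} // pod -> time of SyncLoop ADD
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		ts := tsRe.FindStringSubmatch(line)
		if ts == nil {
			continue
		}
		t, err := time.Parse("15:04:05.000000", ts[1])
		if err != nil {
			continue
		}
		if m := addRe.FindStringSubmatch(line); m != nil {
			added[m[1]] = t
		}
		if m := startRe.FindStringSubmatch(line); m != nil {
			if t0, ok := added[m[1]]; ok {
				fmt.Printf("%-55s first container started after %v\n", m[1], t.Sub(t0))
				delete(added, m[1])
			}
		}
	}
}

Against the entries above it would report roughly 2.24s for openstack/kube-state-metrics-0 (ADD at 11:14:10.542360, first ContainerStarted at 11:14:12.783784).
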
Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.151342 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.155538 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.155720 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.155838 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.155932 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.156023 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-j49bx" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196756 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdd4\" (UniqueName: \"kubernetes.io/projected/fd9a4c00-318d-4bd1-85cb-40971234c3cd-kube-api-access-vwdd4\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196811 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196834 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196897 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196926 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.196979 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.197021 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.197066 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306487 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306554 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306587 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdd4\" (UniqueName: \"kubernetes.io/projected/fd9a4c00-318d-4bd1-85cb-40971234c3cd-kube-api-access-vwdd4\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306611 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306651 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306690 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306708 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.306739 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 
11:14:14.307394 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.308100 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.311483 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.313530 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9a4c00-318d-4bd1-85cb-40971234c3cd-config\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.338200 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.360713 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.389597 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9a4c00-318d-4bd1-85cb-40971234c3cd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.392208 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.394013 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdd4\" (UniqueName: \"kubernetes.io/projected/fd9a4c00-318d-4bd1-85cb-40971234c3cd-kube-api-access-vwdd4\") pod \"ovsdbserver-nb-0\" (UID: \"fd9a4c00-318d-4bd1-85cb-40971234c3cd\") " pod="openstack/ovsdbserver-nb-0" Jan 29 11:14:14 crc kubenswrapper[4593]: I0129 11:14:14.514372 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
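
The local-volume PVs backing these pods follow a two-phase mount: "MountVolume.MountDevice succeeded" stages the volume at its global device mount path (/mnt/openstack/pv07 for ovsdbserver-nb-0 here, pv01/pv02 for the galera pods earlier), and "MountVolume.SetUp succeeded" then exposes that staging mount inside the pod's volume directory. A hedged client-go sketch, not part of the log, for listing such local PVs and their backing paths on a cluster like this one; the kubeconfig location is an assumption, while the field names are the standard core/v1 API.

// list_local_pvs.go — illustrative only: prints every local-volume
// PersistentVolume and the host path that backs it.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pvs, err := cs.CoreV1().PersistentVolumes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pv := range pvs.Items {
		if pv.Spec.Local == nil {
			continue // not a local volume
		}
		claim := "unbound"
		if pv.Spec.ClaimRef != nil {
			claim = pv.Spec.ClaimRef.Namespace + "/" + pv.Spec.ClaimRef.Name
		}
		fmt.Printf("%s -> %s (claim: %s)\n", pv.Name, pv.Spec.Local.Path, claim)
	}
}

On this cluster one would expect output like "local-storage07-crc -> /mnt/openstack/pv07", matching the device mount paths logged above.
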
Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.312693 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cc9qq"] Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.314085 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.320262 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.320308 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7bnzl" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.322117 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cc9qq"] Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.322321 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.423843 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-x49lj"] Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.425671 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443046 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-combined-ca-bundle\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443115 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443147 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run-ovn\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443192 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-ovn-controller-tls-certs\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443232 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lj78\" (UniqueName: \"kubernetes.io/projected/df5842a4-132b-4c71-a970-efe4f00a3827-kube-api-access-2lj78\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443268 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/df5842a4-132b-4c71-a970-efe4f00a3827-scripts\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.443322 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-log-ovn\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.456865 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-x49lj"] Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.544827 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-etc-ovs\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.544883 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-ovn-controller-tls-certs\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.544926 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6b7g\" (UniqueName: \"kubernetes.io/projected/22811af4-f063-480b-81b2-6c09b6526fea-kube-api-access-k6b7g\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.545060 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lj78\" (UniqueName: \"kubernetes.io/projected/df5842a4-132b-4c71-a970-efe4f00a3827-kube-api-access-2lj78\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.545083 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-lib\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.548414 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df5842a4-132b-4c71-a970-efe4f00a3827-scripts\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549026 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df5842a4-132b-4c71-a970-efe4f00a3827-scripts\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549116 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-run\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549175 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-log\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549244 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-log-ovn\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549324 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-combined-ca-bundle\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549373 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549409 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22811af4-f063-480b-81b2-6c09b6526fea-scripts\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.549455 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run-ovn\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.550197 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-log-ovn\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.550313 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.553853 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/df5842a4-132b-4c71-a970-efe4f00a3827-var-run-ovn\") pod \"ovn-controller-cc9qq\" (UID: 
\"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.558899 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-ovn-controller-tls-certs\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.559032 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5842a4-132b-4c71-a970-efe4f00a3827-combined-ca-bundle\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.566804 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lj78\" (UniqueName: \"kubernetes.io/projected/df5842a4-132b-4c71-a970-efe4f00a3827-kube-api-access-2lj78\") pod \"ovn-controller-cc9qq\" (UID: \"df5842a4-132b-4c71-a970-efe4f00a3827\") " pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652047 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22811af4-f063-480b-81b2-6c09b6526fea-scripts\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652401 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-etc-ovs\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652431 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6b7g\" (UniqueName: \"kubernetes.io/projected/22811af4-f063-480b-81b2-6c09b6526fea-kube-api-access-k6b7g\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652466 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-lib\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652497 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-run\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652517 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-log\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.652761 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-log\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.655088 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22811af4-f063-480b-81b2-6c09b6526fea-scripts\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.655273 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-etc-ovs\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.655744 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-lib\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.655870 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/22811af4-f063-480b-81b2-6c09b6526fea-var-run\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.671687 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6b7g\" (UniqueName: \"kubernetes.io/projected/22811af4-f063-480b-81b2-6c09b6526fea-kube-api-access-k6b7g\") pod \"ovn-controller-ovs-x49lj\" (UID: \"22811af4-f063-480b-81b2-6c09b6526fea\") " pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.673136 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cc9qq" Jan 29 11:14:15 crc kubenswrapper[4593]: I0129 11:14:15.753270 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.235009 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.239107 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.242948 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.244688 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.244873 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-5ddd6" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.245036 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.245254 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.327923 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.327986 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328303 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328349 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328463 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328493 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328515 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " 
pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.328541 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5rlg\" (UniqueName: \"kubernetes.io/projected/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-kube-api-access-l5rlg\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.429905 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.429960 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430017 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430043 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430063 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430086 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5rlg\" (UniqueName: \"kubernetes.io/projected/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-kube-api-access-l5rlg\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430144 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.430168 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.431330 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.431719 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.446067 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.446502 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.447653 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.453448 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.454479 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.463318 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.487000 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5rlg\" (UniqueName: \"kubernetes.io/projected/c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9-kube-api-access-l5rlg\") pod \"ovsdbserver-sb-0\" (UID: \"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9\") " pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:18 crc kubenswrapper[4593]: I0129 11:14:18.575220 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 29 11:14:38 crc kubenswrapper[4593]: E0129 11:14:38.552576 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 29 11:14:38 crc kubenswrapper[4593]: E0129 11:14:38.553494 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjf25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(6674f537-f800-4b05-912c-b2671e504c17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:38 crc kubenswrapper[4593]: E0129 11:14:38.554627 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="6674f537-f800-4b05-912c-b2671e504c17" Jan 29 11:14:39 crc kubenswrapper[4593]: E0129 11:14:39.124236 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="6674f537-f800-4b05-912c-b2671e504c17" Jan 29 11:14:39 crc kubenswrapper[4593]: E0129 11:14:39.342916 4593 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 29 11:14:39 crc kubenswrapper[4593]: E0129 11:14:39.343165 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n565h689h686h97h565h58dh64h67bh647h5f4h97h555h684h574h657h7bh655h6fhcbh5cfhcfh546h7fh5c8h676h684hbbh568h54fhc7h5cbh574q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr8wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(dc6f5a6c-3bf0-4f78-89f3-1e2683a37958): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:39 crc 
kubenswrapper[4593]: E0129 11:14:39.344436 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="dc6f5a6c-3bf0-4f78-89f3-1e2683a37958" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.131922 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="dc6f5a6c-3bf0-4f78-89f3-1e2683a37958" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.784178 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.784675 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pqcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-bvbjq_openstack(7e7df070-9e8b-4e24-ac24-4593ef89aca9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.785949 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.812469 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.815820 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr9cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-4mvwn_openstack(4f968f6f-3c5b-4e45-baf2-cf20ac696d9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.817265 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.935359 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.935543 4593 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w69lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-t52gk_openstack(3616718a-e7ca-4045-941b-4109f08f4989): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.936743 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" podUID="3616718a-e7ca-4045-941b-4109f08f4989" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.938820 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.939350 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rc96n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-swvvt_openstack(b705d0db-8509-4a63-9f5a-87976d741ebc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:14:40 crc kubenswrapper[4593]: E0129 11:14:40.940703 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" podUID="b705d0db-8509-4a63-9f5a-87976d741ebc" Jan 29 11:14:41 crc kubenswrapper[4593]: E0129 11:14:41.141568 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" Jan 29 11:14:41 crc kubenswrapper[4593]: E0129 11:14:41.141559 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" Jan 29 11:14:41 crc kubenswrapper[4593]: I0129 11:14:41.473272 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cc9qq"] Jan 29 11:14:41 crc kubenswrapper[4593]: W0129 11:14:41.525831 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf5842a4_132b_4c71_a970_efe4f00a3827.slice/crio-0c9d039339f5d04afbc173d87115effc674ad126948f9242d14888fc390bafc0 WatchSource:0}: Error finding container 
0c9d039339f5d04afbc173d87115effc674ad126948f9242d14888fc390bafc0: Status 404 returned error can't find the container with id 0c9d039339f5d04afbc173d87115effc674ad126948f9242d14888fc390bafc0 Jan 29 11:14:41 crc kubenswrapper[4593]: I0129 11:14:41.980268 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:41 crc kubenswrapper[4593]: I0129 11:14:41.989285 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.030981 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.091244 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config\") pod \"3616718a-e7ca-4045-941b-4109f08f4989\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.091350 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc\") pod \"b705d0db-8509-4a63-9f5a-87976d741ebc\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.091443 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config\") pod \"b705d0db-8509-4a63-9f5a-87976d741ebc\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.091476 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc96n\" (UniqueName: \"kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n\") pod \"b705d0db-8509-4a63-9f5a-87976d741ebc\" (UID: \"b705d0db-8509-4a63-9f5a-87976d741ebc\") " Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.091524 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w69lr\" (UniqueName: \"kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr\") pod \"3616718a-e7ca-4045-941b-4109f08f4989\" (UID: \"3616718a-e7ca-4045-941b-4109f08f4989\") " Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.092274 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b705d0db-8509-4a63-9f5a-87976d741ebc" (UID: "b705d0db-8509-4a63-9f5a-87976d741ebc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.093324 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config" (OuterVolumeSpecName: "config") pod "3616718a-e7ca-4045-941b-4109f08f4989" (UID: "3616718a-e7ca-4045-941b-4109f08f4989"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.093944 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config" (OuterVolumeSpecName: "config") pod "b705d0db-8509-4a63-9f5a-87976d741ebc" (UID: "b705d0db-8509-4a63-9f5a-87976d741ebc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.099093 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n" (OuterVolumeSpecName: "kube-api-access-rc96n") pod "b705d0db-8509-4a63-9f5a-87976d741ebc" (UID: "b705d0db-8509-4a63-9f5a-87976d741ebc"). InnerVolumeSpecName "kube-api-access-rc96n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.100836 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr" (OuterVolumeSpecName: "kube-api-access-w69lr") pod "3616718a-e7ca-4045-941b-4109f08f4989" (UID: "3616718a-e7ca-4045-941b-4109f08f4989"). InnerVolumeSpecName "kube-api-access-w69lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.147965 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq" event={"ID":"df5842a4-132b-4c71-a970-efe4f00a3827","Type":"ContainerStarted","Data":"0c9d039339f5d04afbc173d87115effc674ad126948f9242d14888fc390bafc0"} Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.149575 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" event={"ID":"b705d0db-8509-4a63-9f5a-87976d741ebc","Type":"ContainerDied","Data":"c6b4f9ad5f9e175b3ecf71d1aa97e66d43ecb6c79e5698c17d617486827b1855"} Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.149682 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-swvvt" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.150888 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.150911 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-t52gk" event={"ID":"3616718a-e7ca-4045-941b-4109f08f4989","Type":"ContainerDied","Data":"57892c814f48ce6859a27a763582b6a66ed12dadc0f9828ee1126b0622d692ee"} Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.158725 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9","Type":"ContainerStarted","Data":"8f99ebe56fbf1f5e33ea94183a28c9a507bc72a80c370d988abc16f526b76566"} Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.203365 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3616718a-e7ca-4045-941b-4109f08f4989-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.203397 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.203407 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b705d0db-8509-4a63-9f5a-87976d741ebc-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.203418 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc96n\" (UniqueName: \"kubernetes.io/projected/b705d0db-8509-4a63-9f5a-87976d741ebc-kube-api-access-rc96n\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.203430 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w69lr\" (UniqueName: \"kubernetes.io/projected/3616718a-e7ca-4045-941b-4109f08f4989-kube-api-access-w69lr\") on node \"crc\" DevicePath \"\"" Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.225128 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.232468 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-swvvt"] Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.250299 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:42 crc kubenswrapper[4593]: I0129 11:14:42.256548 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-t52gk"] Jan 29 11:14:43 crc kubenswrapper[4593]: I0129 11:14:43.095040 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3616718a-e7ca-4045-941b-4109f08f4989" path="/var/lib/kubelet/pods/3616718a-e7ca-4045-941b-4109f08f4989/volumes" Jan 29 11:14:43 crc kubenswrapper[4593]: I0129 11:14:43.098494 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b705d0db-8509-4a63-9f5a-87976d741ebc" path="/var/lib/kubelet/pods/b705d0db-8509-4a63-9f5a-87976d741ebc/volumes" Jan 29 11:14:43 crc kubenswrapper[4593]: I0129 11:14:43.114331 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-x49lj"] Jan 29 11:14:43 crc kubenswrapper[4593]: I0129 11:14:43.184011 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" 
event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerStarted","Data":"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96"} Jan 29 11:14:43 crc kubenswrapper[4593]: I0129 11:14:43.187111 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 29 11:14:43 crc kubenswrapper[4593]: E0129 11:14:43.372782 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 29 11:14:43 crc kubenswrapper[4593]: E0129 11:14:43.372827 4593 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 29 11:14:43 crc kubenswrapper[4593]: E0129 11:14:43.372944 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fsks2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(1512a75d-a403-420b-a9be-f931faf1273a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 11:14:43 crc kubenswrapper[4593]: E0129 11:14:43.374055 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openstack/kube-state-metrics-0" podUID="1512a75d-a403-420b-a9be-f931faf1273a" Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.193838 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerStarted","Data":"44978dbad6338f76a863bda910ccc44233b86b74e07d252f43136dd31d7cd624"} Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.284807 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1755998-9149-49be-b10f-c4fe029728bc","Type":"ContainerStarted","Data":"97aa67ebfa2393a610a45c308a8a4b80642d7f74a23d7c02feada231615c7809"} Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.298618 4593 generic.go:334] "Generic (PLEG): container finished" podID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerID="9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96" exitCode=0 Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.298717 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerDied","Data":"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96"} Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.313304 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fd9a4c00-318d-4bd1-85cb-40971234c3cd","Type":"ContainerStarted","Data":"10d04c87a12a3428710a9a6993e86d098b950d8e64c13eb6b4ff4ac35bdcab88"} Jan 29 11:14:44 crc kubenswrapper[4593]: I0129 11:14:44.318230 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x49lj" event={"ID":"22811af4-f063-480b-81b2-6c09b6526fea","Type":"ContainerStarted","Data":"538f749b613307642b44350e64b6cb037231a6b310457aa5fea6c9ebf1ae7b87"} Jan 29 11:14:44 crc kubenswrapper[4593]: E0129 11:14:44.323978 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="1512a75d-a403-420b-a9be-f931faf1273a" Jan 29 11:14:45 crc kubenswrapper[4593]: I0129 11:14:45.328552 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerStarted","Data":"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f"} Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.852320 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.854814 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.935966 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.994170 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.994403 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgv62\" (UniqueName: \"kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:47 crc kubenswrapper[4593]: I0129 11:14:47.994550 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.096506 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.096568 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgv62\" (UniqueName: \"kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.096656 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.097382 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.097497 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.120472 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jgv62\" (UniqueName: \"kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62\") pod \"redhat-marketplace-hnrxg\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " pod="openshift-marketplace/redhat-marketplace-hnrxg"
Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.289942 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnrxg"
Jan 29 11:14:48 crc kubenswrapper[4593]: I0129 11:14:48.751018 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"]
Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.516528 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9","Type":"ContainerStarted","Data":"81570d092c1390e4d61bb8c50f70df099d79d6c5e0a359f15dc0834bd3f5d521"}
Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.521207 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerStarted","Data":"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be"}
Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.526154 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fd9a4c00-318d-4bd1-85cb-40971234c3cd","Type":"ContainerStarted","Data":"3e9d5e0cbc4c1824dbe8de6c8b250af90d4e69ec8502da730733af3378cd013c"}
Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.529049 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq" event={"ID":"df5842a4-132b-4c71-a970-efe4f00a3827","Type":"ContainerStarted","Data":"2cd0fa74c869ba6fc2b7b790ba76246c66b68dcb192a193bd1f6cb04700e2a57"}
Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.530936 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x49lj" event={"ID":"22811af4-f063-480b-81b2-6c09b6526fea","Type":"ContainerStarted","Data":"3c35f96b9e6d360871a4363b31c9b97c03bf9c434960bc17aed93f232b0ef3da"}
Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.535659 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerStarted","Data":"d1f4402fb69794a1a6deb77fd346981fb6d8f2b3bd7eaaad3126ed929b264e54"}
Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.537761 4593 generic.go:334] "Generic (PLEG): container finished" podID="c1755998-9149-49be-b10f-c4fe029728bc" containerID="97aa67ebfa2393a610a45c308a8a4b80642d7f74a23d7c02feada231615c7809" exitCode=0
Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.537830 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1755998-9149-49be-b10f-c4fe029728bc","Type":"ContainerDied","Data":"97aa67ebfa2393a610a45c308a8a4b80642d7f74a23d7c02feada231615c7809"}
Jan 29 11:14:49 crc kubenswrapper[4593]: I0129 11:14:49.563047 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5zjts" podStartSLOduration=6.126915528 podStartE2EDuration="40.563030752s" podCreationTimestamp="2026-01-29 11:14:09 +0000 UTC" firstStartedPulling="2026-01-29 11:14:12.782859606 +0000 UTC m=+918.655893797" lastFinishedPulling="2026-01-29 11:14:47.21897483 +0000 UTC m=+953.092009021" observedRunningTime="2026-01-29 11:14:49.556937899 +0000 UTC m=+955.429972120" watchObservedRunningTime="2026-01-29 11:14:49.563030752 +0000 UTC m=+955.436064943"
Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.311228 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5zjts"
Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.312274 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5zjts"
Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.555710 4593 generic.go:334] "Generic (PLEG): container finished" podID="22811af4-f063-480b-81b2-6c09b6526fea" containerID="3c35f96b9e6d360871a4363b31c9b97c03bf9c434960bc17aed93f232b0ef3da" exitCode=0
Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.557906 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x49lj" event={"ID":"22811af4-f063-480b-81b2-6c09b6526fea","Type":"ContainerDied","Data":"3c35f96b9e6d360871a4363b31c9b97c03bf9c434960bc17aed93f232b0ef3da"}
Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.562700 4593 generic.go:334] "Generic (PLEG): container finished" podID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerID="cd84694d15788663bcca8f1cea58b3f9c8ab044022df23a01ee0a17afa892276" exitCode=0
Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.562774 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerDied","Data":"cd84694d15788663bcca8f1cea58b3f9c8ab044022df23a01ee0a17afa892276"}
Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.565296 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c1755998-9149-49be-b10f-c4fe029728bc","Type":"ContainerStarted","Data":"f10392e8ba068cb86aaf4c0479307405db5a114398a080dc4462c0cf885c71ba"}
Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.565611 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-cc9qq"
Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.609322 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cc9qq" podStartSLOduration=29.949031603999998 podStartE2EDuration="35.609303772s" podCreationTimestamp="2026-01-29 11:14:15 +0000 UTC" firstStartedPulling="2026-01-29 11:14:41.529896643 +0000 UTC m=+947.402930834" lastFinishedPulling="2026-01-29 11:14:47.190168811 +0000 UTC m=+953.063203002" observedRunningTime="2026-01-29 11:14:50.602222203 +0000 UTC m=+956.475256404" watchObservedRunningTime="2026-01-29 11:14:50.609303772 +0000 UTC m=+956.482337963"
Jan 29 11:14:50 crc kubenswrapper[4593]: I0129 11:14:50.659325 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=12.508228658 podStartE2EDuration="43.659307326s" podCreationTimestamp="2026-01-29 11:14:07 +0000 UTC" firstStartedPulling="2026-01-29 11:14:09.731440038 +0000 UTC m=+915.604474239" lastFinishedPulling="2026-01-29 11:14:40.882518716 +0000 UTC m=+946.755552907" observedRunningTime="2026-01-29 11:14:50.648549559 +0000 UTC m=+956.521583760" watchObservedRunningTime="2026-01-29 11:14:50.659307326 +0000 UTC m=+956.532341517"
Jan 29 11:14:51 crc kubenswrapper[4593]: I0129 11:14:51.568379 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5zjts" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:14:51 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:14:51 crc kubenswrapper[4593]: >
Jan 29 11:14:51 crc kubenswrapper[4593]: I0129 11:14:51.587695 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6674f537-f800-4b05-912c-b2671e504c17","Type":"ContainerStarted","Data":"191dc09ec9f00c9db76f1bdf3e46d2d35456e3970488e371a323804fbf1f6993"}
Jan 29 11:14:51 crc kubenswrapper[4593]: I0129 11:14:51.595185 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x49lj" event={"ID":"22811af4-f063-480b-81b2-6c09b6526fea","Type":"ContainerStarted","Data":"21186a30326857d6527171cd31a7d953ddb9db6ca1df416000c061f34f0ee3d1"}
Jan 29 11:14:52 crc kubenswrapper[4593]: I0129 11:14:52.608164 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-x49lj" event={"ID":"22811af4-f063-480b-81b2-6c09b6526fea","Type":"ContainerStarted","Data":"105ea44e3a1b6249121d9400cc3e0093a41d887065da0dd822b53606b0838287"}
Jan 29 11:14:52 crc kubenswrapper[4593]: I0129 11:14:52.608939 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-x49lj"
Jan 29 11:14:52 crc kubenswrapper[4593]: I0129 11:14:52.608982 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-x49lj"
Jan 29 11:14:52 crc kubenswrapper[4593]: I0129 11:14:52.613030 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerStarted","Data":"af838fa010c8947df25073166fa4b7b48c902b1c9dfcc02609c3d4b2597c538c"}
Jan 29 11:14:52 crc kubenswrapper[4593]: I0129 11:14:52.646621 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-x49lj" podStartSLOduration=33.831612102 podStartE2EDuration="37.646598518s" podCreationTimestamp="2026-01-29 11:14:15 +0000 UTC" firstStartedPulling="2026-01-29 11:14:43.375466973 +0000 UTC m=+949.248501164" lastFinishedPulling="2026-01-29 11:14:47.190453389 +0000 UTC m=+953.063487580" observedRunningTime="2026-01-29 11:14:52.643167807 +0000 UTC m=+958.516201998" watchObservedRunningTime="2026-01-29 11:14:52.646598518 +0000 UTC m=+958.519632709"
Jan 29 11:14:53 crc kubenswrapper[4593]: I0129 11:14:53.624513 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fd9a4c00-318d-4bd1-85cb-40971234c3cd","Type":"ContainerStarted","Data":"90f76511404af4bd114645242b92da7e485fc55b5702244b6b91afff28db1bce"}
Jan 29 11:14:53 crc kubenswrapper[4593]: I0129 11:14:53.626459 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9","Type":"ContainerStarted","Data":"886a74852d3d5b1e67156d954d91d303e6f37a4bb0cba5783dd60c45e12a1ad0"}
Jan 29 11:14:53 crc kubenswrapper[4593]: I0129 11:14:53.628258 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"dc6f5a6c-3bf0-4f78-89f3-1e2683a37958","Type":"ContainerStarted","Data":"363fef13a5ff1e3a65bb60b6f2eaecb8b1c519fbcf12f35e57117039af0c67ab"}
Jan 29 11:14:53 crc kubenswrapper[4593]: I0129 11:14:53.648113 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=31.768157499 podStartE2EDuration="40.648097995s" podCreationTimestamp="2026-01-29 11:14:13 +0000 UTC" firstStartedPulling="2026-01-29 11:14:43.388589443 +0000 UTC m=+949.261623634" lastFinishedPulling="2026-01-29 11:14:52.268529939 +0000 UTC m=+958.141564130" observedRunningTime="2026-01-29 11:14:53.647253182 +0000 UTC m=+959.520287373" watchObservedRunningTime="2026-01-29 11:14:53.648097995 +0000 UTC m=+959.521132186"
Jan 29 11:14:53 crc kubenswrapper[4593]: I0129 11:14:53.678562 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=26.451656216 podStartE2EDuration="36.678524107s" podCreationTimestamp="2026-01-29 11:14:17 +0000 UTC" firstStartedPulling="2026-01-29 11:14:42.049524889 +0000 UTC m=+947.922559080" lastFinishedPulling="2026-01-29 11:14:52.27639278 +0000 UTC m=+958.149426971" observedRunningTime="2026-01-29 11:14:53.67453694 +0000 UTC m=+959.547571141" watchObservedRunningTime="2026-01-29 11:14:53.678524107 +0000 UTC m=+959.551558298"
Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.002817 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.094078 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.35297166 podStartE2EDuration="46.094054895s" podCreationTimestamp="2026-01-29 11:14:08 +0000 UTC" firstStartedPulling="2026-01-29 11:14:09.800904542 +0000 UTC m=+915.673938723" lastFinishedPulling="2026-01-29 11:14:52.541987767 +0000 UTC m=+958.415021958" observedRunningTime="2026-01-29 11:14:53.70600508 +0000 UTC m=+959.579039271" watchObservedRunningTime="2026-01-29 11:14:54.094054895 +0000 UTC m=+959.967089086"
Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.517006 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.576859 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 29 11:14:54 crc kubenswrapper[4593]: E0129 11:14:54.609267 4593 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:56878->38.102.83.147:45711: write tcp 38.102.83.147:56878->38.102.83.147:45711: write: broken pipe
Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.617490 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.782177 4593 generic.go:334] "Generic (PLEG): container finished" podID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerID="af838fa010c8947df25073166fa4b7b48c902b1c9dfcc02609c3d4b2597c538c" exitCode=0
Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.783139 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerDied","Data":"af838fa010c8947df25073166fa4b7b48c902b1c9dfcc02609c3d4b2597c538c"}
Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.787510 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 29 11:14:54 crc kubenswrapper[4593]: I0129 11:14:54.856524 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.474358 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-g6lk4"]
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.476268 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.480330 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.489531 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g6lk4"]
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.537951 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"]
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.579751 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"]
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.581322 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.590265 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605775 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"]
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605800 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn72l\" (UniqueName: \"kubernetes.io/projected/9299d646-8191-4da6-a2d1-d5a8c6492e91-kube-api-access-zn72l\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605881 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovn-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605905 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-combined-ca-bundle\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605960 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.605987 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299d646-8191-4da6-a2d1-d5a8c6492e91-config\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.606031 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovs-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.707824 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.707905 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299d646-8191-4da6-a2d1-d5a8c6492e91-config\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.707958 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.707998 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4vf\" (UniqueName: \"kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708039 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovs-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708083 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn72l\" (UniqueName: \"kubernetes.io/projected/9299d646-8191-4da6-a2d1-d5a8c6492e91-kube-api-access-zn72l\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708120 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708178 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708207 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovn-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708238 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-combined-ca-bundle\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.708906 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovn-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.709509 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299d646-8191-4da6-a2d1-d5a8c6492e91-config\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.710148 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9299d646-8191-4da6-a2d1-d5a8c6492e91-ovs-rundir\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.715329 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-combined-ca-bundle\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.716075 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9299d646-8191-4da6-a2d1-d5a8c6492e91-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.732121 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn72l\" (UniqueName: \"kubernetes.io/projected/9299d646-8191-4da6-a2d1-d5a8c6492e91-kube-api-access-zn72l\") pod \"ovn-controller-metrics-g6lk4\" (UID: \"9299d646-8191-4da6-a2d1-d5a8c6492e91\") " pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.805667 4593 generic.go:334] "Generic (PLEG): container finished" podID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" containerID="544b0e0df1d380946a3e8080c9c9fb0744ffc4f89a7dc3a91498dc76d46dd2a7" exitCode=0
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.805766 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" event={"ID":"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b","Type":"ContainerDied","Data":"544b0e0df1d380946a3e8080c9c9fb0744ffc4f89a7dc3a91498dc76d46dd2a7"}
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.813892 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.814016 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.814052 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj4vf\" (UniqueName: \"kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.814116 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.815154 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.815154 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.815892 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.826774 4593 generic.go:334] "Generic (PLEG): container finished" podID="6674f537-f800-4b05-912c-b2671e504c17" containerID="191dc09ec9f00c9db76f1bdf3e46d2d35456e3970488e371a323804fbf1f6993" exitCode=0
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.826895 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6674f537-f800-4b05-912c-b2671e504c17","Type":"ContainerDied","Data":"191dc09ec9f00c9db76f1bdf3e46d2d35456e3970488e371a323804fbf1f6993"}
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.840694 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj4vf\" (UniqueName: \"kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf\") pod \"dnsmasq-dns-6bc7876d45-lw6d5\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.847325 4593 generic.go:334] "Generic (PLEG): container finished" podID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerID="da7810f16f10ab271866380a9652b5504d930f59d786d1df10f9e1a22d6586a4" exitCode=0
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.847840 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" event={"ID":"7e7df070-9e8b-4e24-ac24-4593ef89aca9","Type":"ContainerDied","Data":"da7810f16f10ab271866380a9652b5504d930f59d786d1df10f9e1a22d6586a4"}
Jan 29 11:14:55 crc kubenswrapper[4593]: I0129 11:14:55.871758 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-g6lk4"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.112718 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.226826 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"]
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.268230 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"]
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.271190 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.283988 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.297142 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"]
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.405205 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99nm4\" (UniqueName: \"kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.405765 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.405820 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.405847 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.405945 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.512247 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99nm4\" (UniqueName: \"kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.512307 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.512327 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.512344 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.512400 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.513500 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.513721 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.513893 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.518129 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.521015 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:56 crc kubenswrapper[4593]: I0129 11:14:56.546970 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99nm4\" (UniqueName: \"kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4\") pod \"dnsmasq-dns-8554648995-cgm9z\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") " pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.139605 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.179912 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerStarted","Data":"4895474b2f5eeb052b2d990d58ef03a99f4466ec22ffd294eacac21fca622134"}
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.232108 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hnrxg" podStartSLOduration=5.264674498 podStartE2EDuration="10.232090397s" podCreationTimestamp="2026-01-29 11:14:47 +0000 UTC" firstStartedPulling="2026-01-29 11:14:50.565887163 +0000 UTC m=+956.438921354" lastFinishedPulling="2026-01-29 11:14:55.533303062 +0000 UTC m=+961.406337253" observedRunningTime="2026-01-29 11:14:57.220165448 +0000 UTC m=+963.093199639" watchObservedRunningTime="2026-01-29 11:14:57.232090397 +0000 UTC m=+963.105124588"
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.328989 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.370703 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn"
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.458147 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr9cr\" (UniqueName: \"kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr\") pod \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") "
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.458389 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc\") pod \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") "
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.458446 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config\") pod \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\" (UID: \"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b\") "
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.466922 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr" (OuterVolumeSpecName: "kube-api-access-xr9cr") pod "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" (UID: "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b"). InnerVolumeSpecName "kube-api-access-xr9cr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.524121 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"]
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.540154 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" (UID: "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.547232 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config" (OuterVolumeSpecName: "config") pod "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" (UID: "4f968f6f-3c5b-4e45-baf2-cf20ac696d9b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:14:57 crc kubenswrapper[4593]: W0129 11:14:57.553268 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9288612d_73d6_410c_b109_9d3124e96f9c.slice/crio-55fb6adab579ff40463d7f5f9cf1505c1fa8ef85800ff903e67f7aacf830b70d WatchSource:0}: Error finding container 55fb6adab579ff40463d7f5f9cf1505c1fa8ef85800ff903e67f7aacf830b70d: Status 404 returned error can't find the container with id 55fb6adab579ff40463d7f5f9cf1505c1fa8ef85800ff903e67f7aacf830b70d
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.560499 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr9cr\" (UniqueName: \"kubernetes.io/projected/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-kube-api-access-xr9cr\") on node \"crc\" DevicePath \"\""
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.560954 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.560970 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:14:57 crc kubenswrapper[4593]: W0129 11:14:57.574097 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9299d646_8191_4da6_a2d1_d5a8c6492e91.slice/crio-e521fb641da25c817d3aafbd3daac480e597cf3fc2cab17e3df92fecd539f3c3 WatchSource:0}: Error finding container e521fb641da25c817d3aafbd3daac480e597cf3fc2cab17e3df92fecd539f3c3: Status 404 returned error can't find the container with id e521fb641da25c817d3aafbd3daac480e597cf3fc2cab17e3df92fecd539f3c3
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.575373 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-g6lk4"]
Jan 29 11:14:57 crc kubenswrapper[4593]: I0129 11:14:57.908104 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"]
Jan 29 11:14:57 crc kubenswrapper[4593]: W0129 11:14:57.917448 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba134367_9e72_466a_8aa3_0bda1deb7791.slice/crio-03a28ce5a42adf28e21bd51fb0ee9216c7ab5bdb7d9e843e28d1f210295085a6 WatchSource:0}: Error finding container 03a28ce5a42adf28e21bd51fb0ee9216c7ab5bdb7d9e843e28d1f210295085a6: Status 404 returned error can't find the container with id 03a28ce5a42adf28e21bd51fb0ee9216c7ab5bdb7d9e843e28d1f210295085a6
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.188161 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn" event={"ID":"4f968f6f-3c5b-4e45-baf2-cf20ac696d9b","Type":"ContainerDied","Data":"007a02e651669e8d70d7d24081e75b51bae9e37c2bf6d5643b4ba609d3b0011b"}
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.188218 4593 scope.go:117] "RemoveContainer" containerID="544b0e0df1d380946a3e8080c9c9fb0744ffc4f89a7dc3a91498dc76d46dd2a7"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.188175 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-4mvwn"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.193354 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6674f537-f800-4b05-912c-b2671e504c17","Type":"ContainerStarted","Data":"632fdf977b3a3ad2d924089de4c26155a1b12bab23fab4b4d2a285a437c1b589"}
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.196201 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerStarted","Data":"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597"}
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.196254 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerStarted","Data":"55fb6adab579ff40463d7f5f9cf1505c1fa8ef85800ff903e67f7aacf830b70d"}
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.198542 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1512a75d-a403-420b-a9be-f931faf1273a","Type":"ContainerStarted","Data":"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4"}
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.198826 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.200365 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" event={"ID":"7e7df070-9e8b-4e24-ac24-4593ef89aca9","Type":"ContainerStarted","Data":"4f21c2eef273f8566ecba7a486c08323d148beb0c5639f76f2c2c3529cc80795"}
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.200462 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="dnsmasq-dns" containerID="cri-o://4f21c2eef273f8566ecba7a486c08323d148beb0c5639f76f2c2c3529cc80795" gracePeriod=10
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.200533 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.206404 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g6lk4" event={"ID":"9299d646-8191-4da6-a2d1-d5a8c6492e91","Type":"ContainerStarted","Data":"e521fb641da25c817d3aafbd3daac480e597cf3fc2cab17e3df92fecd539f3c3"}
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.210400 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cgm9z" event={"ID":"ba134367-9e72-466a-8aa3-0bda1deb7791","Type":"ContainerStarted","Data":"03a28ce5a42adf28e21bd51fb0ee9216c7ab5bdb7d9e843e28d1f210295085a6"}
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.245562 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371984.609236 podStartE2EDuration="52.24553957s" podCreationTimestamp="2026-01-29 11:14:06 +0000 UTC" firstStartedPulling="2026-01-29 11:14:08.685937797 +0000 UTC m=+914.558971988" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:14:58.231583948 +0000 UTC m=+964.104618139" watchObservedRunningTime="2026-01-29 11:14:58.24553957 +0000 UTC m=+964.118573761"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.263500 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" podStartSLOduration=4.754770744 podStartE2EDuration="54.263484049s" podCreationTimestamp="2026-01-29 11:14:04 +0000 UTC" firstStartedPulling="2026-01-29 11:14:05.344318534 +0000 UTC m=+911.217352725" lastFinishedPulling="2026-01-29 11:14:54.853031839 +0000 UTC m=+960.726066030" observedRunningTime="2026-01-29 11:14:58.253069532 +0000 UTC m=+964.126103723" watchObservedRunningTime="2026-01-29 11:14:58.263484049 +0000 UTC m=+964.136518240"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.270003 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.220986754 podStartE2EDuration="48.269985473s" podCreationTimestamp="2026-01-29 11:14:10 +0000 UTC" firstStartedPulling="2026-01-29 11:14:11.933272335 +0000 UTC m=+917.806306526" lastFinishedPulling="2026-01-29 11:14:55.982271054 +0000 UTC m=+961.855305245" observedRunningTime="2026-01-29 11:14:58.269149581 +0000 UTC m=+964.142183772" watchObservedRunningTime="2026-01-29 11:14:58.269985473 +0000 UTC m=+964.143019664"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.290512 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hnrxg"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.291552 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hnrxg"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.299709 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.321845 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"]
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.326055 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-4mvwn"]
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.518488 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Jan 29 11:14:58 crc kubenswrapper[4593]: E0129 11:14:58.518974 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" containerName="init"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.518988 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" containerName="init"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.519211 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" containerName="init"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.520159 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.524967 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.525287 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.649102 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.649358 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-4nb56"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.710932 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.756701 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-config\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.756896 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.756951 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.756981 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-scripts\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.757029 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.757099 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k58x\" (UniqueName: \"kubernetes.io/projected/5320cc21-470d-450c-afa0-c5926e3243c6-kube-api-access-5k58x\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.757159 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858087 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858172 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858204 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-scripts\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858248 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858307 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k58x\" (UniqueName: \"kubernetes.io/projected/5320cc21-470d-450c-afa0-c5926e3243c6-kube-api-access-5k58x\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858351 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.858385 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-config\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.859143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.860112 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-scripts\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.863236 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5320cc21-470d-450c-afa0-c5926e3243c6-config\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.864521 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.866515 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.867267 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5320cc21-470d-450c-afa0-c5926e3243c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.888855 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k58x\" (UniqueName: \"kubernetes.io/projected/5320cc21-470d-450c-afa0-c5926e3243c6-kube-api-access-5k58x\") pod \"ovn-northd-0\" (UID: \"5320cc21-470d-450c-afa0-c5926e3243c6\") " pod="openstack/ovn-northd-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.916034 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 29 11:14:58 crc kubenswrapper[4593]: I0129 11:14:58.916396 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.018791 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.027250 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.132132 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f968f6f-3c5b-4e45-baf2-cf20ac696d9b" path="/var/lib/kubelet/pods/4f968f6f-3c5b-4e45-baf2-cf20ac696d9b/volumes"
Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.233744 4593 generic.go:334] "Generic (PLEG): container finished" podID="9288612d-73d6-410c-b109-9d3124e96f9c" containerID="56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597" exitCode=0
Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.234106 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerDied","Data":"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597"}
Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.238258 4593 generic.go:334] "Generic (PLEG): container finished" podID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerID="4f21c2eef273f8566ecba7a486c08323d148beb0c5639f76f2c2c3529cc80795" exitCode=0
Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.238596 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" event={"ID":"7e7df070-9e8b-4e24-ac24-4593ef89aca9","Type":"ContainerDied","Data":"4f21c2eef273f8566ecba7a486c08323d148beb0c5639f76f2c2c3529cc80795"}
Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.382302 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hnrxg" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:14:59 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:14:59 crc kubenswrapper[4593]: >
Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.720384 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Jan 29 11:14:59 crc kubenswrapper[4593]: W0129 11:14:59.742510 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5320cc21_470d_450c_afa0_c5926e3243c6.slice/crio-dda9112777bee58c842e5e0f470559789a6d1545c7d7ee715e8c3a8ebdf8afb5 WatchSource:0}: Error finding container dda9112777bee58c842e5e0f470559789a6d1545c7d7ee715e8c3a8ebdf8afb5: Status 404 returned error can't find the container with id dda9112777bee58c842e5e0f470559789a6d1545c7d7ee715e8c3a8ebdf8afb5
Jan 29 11:14:59 crc kubenswrapper[4593]: I0129 11:14:59.934486 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.069893 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc\") pod \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") "
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.069986 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pqcc\" (UniqueName: \"kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc\") pod \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") "
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.070136 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config\") pod \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\" (UID: \"7e7df070-9e8b-4e24-ac24-4593ef89aca9\") "
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.092411 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc" (OuterVolumeSpecName: "kube-api-access-8pqcc") pod "7e7df070-9e8b-4e24-ac24-4593ef89aca9" (UID: "7e7df070-9e8b-4e24-ac24-4593ef89aca9"). InnerVolumeSpecName "kube-api-access-8pqcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.139508 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e7df070-9e8b-4e24-ac24-4593ef89aca9" (UID: "7e7df070-9e8b-4e24-ac24-4593ef89aca9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.164052 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config" (OuterVolumeSpecName: "config") pod "7e7df070-9e8b-4e24-ac24-4593ef89aca9" (UID: "7e7df070-9e8b-4e24-ac24-4593ef89aca9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.173066 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.173095 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e7df070-9e8b-4e24-ac24-4593ef89aca9-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.173106 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pqcc\" (UniqueName: \"kubernetes.io/projected/7e7df070-9e8b-4e24-ac24-4593ef89aca9-kube-api-access-8pqcc\") on node \"crc\" DevicePath \"\""
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.178184 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"]
Jan 29 11:15:00 crc kubenswrapper[4593]: E0129 11:15:00.178587 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="dnsmasq-dns"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.178610 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="dnsmasq-dns"
Jan 29 11:15:00 crc kubenswrapper[4593]: E0129 11:15:00.178685 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="init"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.178697 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="init"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.178909 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" containerName="dnsmasq-dns"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.179589 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.182396 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.182604 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.183714 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.197884 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"]
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.268377 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq" event={"ID":"7e7df070-9e8b-4e24-ac24-4593ef89aca9","Type":"ContainerDied","Data":"565dedef28a6391201b894212d9023a697aa75bba8630f014fc28b15721c946e"}
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.270029 4593 scope.go:117] "RemoveContainer" containerID="4f21c2eef273f8566ecba7a486c08323d148beb0c5639f76f2c2c3529cc80795"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.270312 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bvbjq"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.282735 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4g5j\" (UniqueName: \"kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.284390 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.285010 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.286190 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-g6lk4" event={"ID":"9299d646-8191-4da6-a2d1-d5a8c6492e91","Type":"ContainerStarted","Data":"f65f29a02a36886ce3d7e342d32921b0f906594c830d8f38f18fb6431ad3619e"}
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.290598 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5320cc21-470d-450c-afa0-c5926e3243c6","Type":"ContainerStarted","Data":"dda9112777bee58c842e5e0f470559789a6d1545c7d7ee715e8c3a8ebdf8afb5"}
Jan 29 11:15:00 crc kubenswrapper[4593]: I0129
11:15:00.295619 4593 generic.go:334] "Generic (PLEG): container finished" podID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerID="42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c" exitCode=0 Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.295701 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cgm9z" event={"ID":"ba134367-9e72-466a-8aa3-0bda1deb7791","Type":"ContainerDied","Data":"42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c"} Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.309581 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerStarted","Data":"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92"} Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.335644 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-g6lk4" podStartSLOduration=5.335582595 podStartE2EDuration="5.335582595s" podCreationTimestamp="2026-01-29 11:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:00.327117419 +0000 UTC m=+966.200151610" watchObservedRunningTime="2026-01-29 11:15:00.335582595 +0000 UTC m=+966.208616786" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.341463 4593 scope.go:117] "RemoveContainer" containerID="da7810f16f10ab271866380a9652b5504d930f59d786d1df10f9e1a22d6586a4" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.387772 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4g5j\" (UniqueName: \"kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.387859 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.387940 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.389147 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.415059 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume\") pod \"collect-profiles-29494755-htvh8\" 
(UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.464831 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.464897 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.471294 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bvbjq"] Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.476135 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" podStartSLOduration=5.476114055 podStartE2EDuration="5.476114055s" podCreationTimestamp="2026-01-29 11:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:00.464591228 +0000 UTC m=+966.337625449" watchObservedRunningTime="2026-01-29 11:15:00.476114055 +0000 UTC m=+966.349148236" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.494057 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4g5j\" (UniqueName: \"kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j\") pod \"collect-profiles-29494755-htvh8\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.556586 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.559528 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.609818 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 29 11:15:00 crc kubenswrapper[4593]: I0129 11:15:00.750927 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.039606 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"] Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.094870 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7df070-9e8b-4e24-ac24-4593ef89aca9" path="/var/lib/kubelet/pods/7e7df070-9e8b-4e24-ac24-4593ef89aca9/volumes" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.121884 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.140927 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.142469 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.177491 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.359910 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.359994 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.360018 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.360043 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.360074 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2649\" (UniqueName: \"kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.376015 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cgm9z" event={"ID":"ba134367-9e72-466a-8aa3-0bda1deb7791","Type":"ContainerStarted","Data":"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a"} Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.376286 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.398054 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-cgm9z" podStartSLOduration=5.398033327 podStartE2EDuration="5.398033327s" podCreationTimestamp="2026-01-29 11:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:01.396207729 +0000 UTC m=+967.269241920" watchObservedRunningTime="2026-01-29 11:15:01.398033327 +0000 UTC m=+967.271067518" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.461616 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config\") pod 
\"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.461733 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.461768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.461804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2649\" (UniqueName: \"kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.461864 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.462585 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.462799 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.462811 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.463668 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.494450 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2649\" (UniqueName: \"kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649\") pod \"dnsmasq-dns-b8fbc5445-lm2dg\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" 
Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.509076 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:01 crc kubenswrapper[4593]: I0129 11:15:01.570241 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"] Jan 29 11:15:02 crc kubenswrapper[4593]: W0129 11:15:02.003400 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d624d92_85b0_48dc_94f4_047ac84aaa0c.slice/crio-e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc WatchSource:0}: Error finding container e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc: Status 404 returned error can't find the container with id e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.124511 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.129515 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.135620 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-mpxfb" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.135872 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.136012 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.137431 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.178507 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.278674 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-lock\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.278715 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.278736 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307ad072-fdfc-4c55-8891-bc041d755b1a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.278811 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-cache\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc 
kubenswrapper[4593]: I0129 11:15:02.278859 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.278880 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4pwv\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-kube-api-access-k4pwv\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.320407 4593 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:54944->38.102.83.147:45711: write tcp 38.102.83.147:54944->38.102.83.147:45711: write: broken pipe Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.381909 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-lock\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.381966 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.381987 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307ad072-fdfc-4c55-8891-bc041d755b1a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.382051 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-cache\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.382130 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.382151 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4pwv\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-kube-api-access-k4pwv\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.383129 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-lock\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.383210 4593 projected.go:288] Couldn't get 
configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.383222 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.383257 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:02.883243179 +0000 UTC m=+968.756277370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.383806 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.384143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/307ad072-fdfc-4c55-8891-bc041d755b1a-cache\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.395757 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/307ad072-fdfc-4c55-8891-bc041d755b1a-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.399219 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" event={"ID":"8d624d92-85b0-48dc-94f4-047ac84aaa0c","Type":"ContainerStarted","Data":"c821139e8b0317636f7e45a909cbff9ea156a76bb671f91a36836e985d04e36c"} Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.399310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" event={"ID":"8d624d92-85b0-48dc-94f4-047ac84aaa0c","Type":"ContainerStarted","Data":"e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc"} Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.402466 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="dnsmasq-dns" containerID="cri-o://2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92" gracePeriod=10 Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.402926 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5zjts" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="registry-server" containerID="cri-o://77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be" gracePeriod=2 Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.420766 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.429025 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" podStartSLOduration=2.428997379 podStartE2EDuration="2.428997379s" podCreationTimestamp="2026-01-29 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:02.426613496 +0000 UTC m=+968.299647687" watchObservedRunningTime="2026-01-29 11:15:02.428997379 +0000 UTC m=+968.302031580" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.430522 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4pwv\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-kube-api-access-k4pwv\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.569023 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:15:02 crc kubenswrapper[4593]: I0129 11:15:02.900707 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.900907 4593 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.901103 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:15:02 crc kubenswrapper[4593]: E0129 11:15:02.901154 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:03.901138028 +0000 UTC m=+969.774172209 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.006865 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.011614 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103281 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content\") pod \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103528 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njvk6\" (UniqueName: \"kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6\") pod \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103554 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj4vf\" (UniqueName: \"kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf\") pod \"9288612d-73d6-410c-b109-9d3124e96f9c\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103583 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config\") pod \"9288612d-73d6-410c-b109-9d3124e96f9c\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103609 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc\") pod \"9288612d-73d6-410c-b109-9d3124e96f9c\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103653 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities\") pod \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\" (UID: \"80b1ef7b-9dfd-4910-99a8-830a1735fb79\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.103685 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb\") pod \"9288612d-73d6-410c-b109-9d3124e96f9c\" (UID: \"9288612d-73d6-410c-b109-9d3124e96f9c\") " Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.107103 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities" (OuterVolumeSpecName: "utilities") pod "80b1ef7b-9dfd-4910-99a8-830a1735fb79" (UID: "80b1ef7b-9dfd-4910-99a8-830a1735fb79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.112767 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf" (OuterVolumeSpecName: "kube-api-access-xj4vf") pod "9288612d-73d6-410c-b109-9d3124e96f9c" (UID: "9288612d-73d6-410c-b109-9d3124e96f9c"). InnerVolumeSpecName "kube-api-access-xj4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.164745 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6" (OuterVolumeSpecName: "kube-api-access-njvk6") pod "80b1ef7b-9dfd-4910-99a8-830a1735fb79" (UID: "80b1ef7b-9dfd-4910-99a8-830a1735fb79"). InnerVolumeSpecName "kube-api-access-njvk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.204835 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80b1ef7b-9dfd-4910-99a8-830a1735fb79" (UID: "80b1ef7b-9dfd-4910-99a8-830a1735fb79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.205883 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.205896 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njvk6\" (UniqueName: \"kubernetes.io/projected/80b1ef7b-9dfd-4910-99a8-830a1735fb79-kube-api-access-njvk6\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.205907 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj4vf\" (UniqueName: \"kubernetes.io/projected/9288612d-73d6-410c-b109-9d3124e96f9c-kube-api-access-xj4vf\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.205915 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80b1ef7b-9dfd-4910-99a8-830a1735fb79-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.208551 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config" (OuterVolumeSpecName: "config") pod "9288612d-73d6-410c-b109-9d3124e96f9c" (UID: "9288612d-73d6-410c-b109-9d3124e96f9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.209248 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9288612d-73d6-410c-b109-9d3124e96f9c" (UID: "9288612d-73d6-410c-b109-9d3124e96f9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.265738 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9288612d-73d6-410c-b109-9d3124e96f9c" (UID: "9288612d-73d6-410c-b109-9d3124e96f9c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.307395 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.307418 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.307428 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9288612d-73d6-410c-b109-9d3124e96f9c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.411360 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5320cc21-470d-450c-afa0-c5926e3243c6","Type":"ContainerStarted","Data":"09f9724e79bce4ee329a8c8bec5b3420af1adbdb15836f3d8b44fdfd68055ebc"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.411413 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5320cc21-470d-450c-afa0-c5926e3243c6","Type":"ContainerStarted","Data":"2ed636bf32d447bd13812d8ebeaa5f27d6a5644f848884b286c9f4f83292c007"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.412568 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.415525 4593 generic.go:334] "Generic (PLEG): container finished" podID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerID="77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be" exitCode=0 Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.415584 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5zjts" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.415673 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerDied","Data":"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.415716 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5zjts" event={"ID":"80b1ef7b-9dfd-4910-99a8-830a1735fb79","Type":"ContainerDied","Data":"ade31aca7ba29e2371128a860beb89fe80c8c2fbd7528ceac5d2035097f7e6ad"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.415772 4593 scope.go:117] "RemoveContainer" containerID="77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.418247 4593 generic.go:334] "Generic (PLEG): container finished" podID="8d624d92-85b0-48dc-94f4-047ac84aaa0c" containerID="c821139e8b0317636f7e45a909cbff9ea156a76bb671f91a36836e985d04e36c" exitCode=0 Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.418288 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" event={"ID":"8d624d92-85b0-48dc-94f4-047ac84aaa0c","Type":"ContainerDied","Data":"c821139e8b0317636f7e45a909cbff9ea156a76bb671f91a36836e985d04e36c"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.420199 4593 generic.go:334] "Generic (PLEG): container finished" podID="9288612d-73d6-410c-b109-9d3124e96f9c" containerID="2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92" exitCode=0 Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.420245 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerDied","Data":"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.420260 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" event={"ID":"9288612d-73d6-410c-b109-9d3124e96f9c","Type":"ContainerDied","Data":"55fb6adab579ff40463d7f5f9cf1505c1fa8ef85800ff903e67f7aacf830b70d"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.420272 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-lw6d5" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.421763 4593 generic.go:334] "Generic (PLEG): container finished" podID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerID="3a1884f5780e941a8c795fbe0356484ff14b38b8354e043148a53f7b7fef73d5" exitCode=0 Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.421788 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" event={"ID":"1dc04f8a-c522-49b8-bdf6-59b7edad2d63","Type":"ContainerDied","Data":"3a1884f5780e941a8c795fbe0356484ff14b38b8354e043148a53f7b7fef73d5"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.421803 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" event={"ID":"1dc04f8a-c522-49b8-bdf6-59b7edad2d63","Type":"ContainerStarted","Data":"2b0a11af2b235a2fb8adafd584c05dc53c5aec7086cbb35dcb104dd6b636f9bc"} Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.452045 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.151461438 podStartE2EDuration="5.45202704s" podCreationTimestamp="2026-01-29 11:14:58 +0000 UTC" firstStartedPulling="2026-01-29 11:14:59.751004436 +0000 UTC m=+965.624038627" lastFinishedPulling="2026-01-29 11:15:02.051570038 +0000 UTC m=+967.924604229" observedRunningTime="2026-01-29 11:15:03.441140529 +0000 UTC m=+969.314174720" watchObservedRunningTime="2026-01-29 11:15:03.45202704 +0000 UTC m=+969.325061231" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.455586 4593 scope.go:117] "RemoveContainer" containerID="9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.494513 4593 scope.go:117] "RemoveContainer" containerID="88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.518077 4593 scope.go:117] "RemoveContainer" containerID="77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be" Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.518482 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be\": container with ID starting with 77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be not found: ID does not exist" containerID="77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.518587 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be"} err="failed to get container status \"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be\": rpc error: code = NotFound desc = could not find container \"77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be\": container with ID starting with 77efa027816de776464e0940fd5bce08b6a4290d0af1ab6b28b714dc35a913be not found: ID does not exist" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.518617 4593 scope.go:117] "RemoveContainer" containerID="9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.519617 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"] Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 
11:15:03.520541 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96\": container with ID starting with 9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96 not found: ID does not exist" containerID="9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.520649 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96"} err="failed to get container status \"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96\": rpc error: code = NotFound desc = could not find container \"9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96\": container with ID starting with 9bb1171a6467cebf0bf64e79b5500c99261d694fa11543b2d01d7b0ddcbaec96 not found: ID does not exist" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.520669 4593 scope.go:117] "RemoveContainer" containerID="88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351" Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.520930 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351\": container with ID starting with 88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351 not found: ID does not exist" containerID="88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.520950 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351"} err="failed to get container status \"88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351\": rpc error: code = NotFound desc = could not find container \"88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351\": container with ID starting with 88f786f78b398f505ec5a44af965fed646d1e70bc02feb0e5bb5b6e39bfa9351 not found: ID does not exist" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.520962 4593 scope.go:117] "RemoveContainer" containerID="2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.535125 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-lw6d5"] Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.543129 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.557610 4593 scope.go:117] "RemoveContainer" containerID="56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.559505 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5zjts"] Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.591120 4593 scope.go:117] "RemoveContainer" containerID="2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92" Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.592123 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92\": 
container with ID starting with 2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92 not found: ID does not exist" containerID="2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.592160 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92"} err="failed to get container status \"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92\": rpc error: code = NotFound desc = could not find container \"2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92\": container with ID starting with 2ef61e3b91c1c3e6e252646d712ea2fdfcde408704d5e98a8540b0b3553ebe92 not found: ID does not exist" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.592189 4593 scope.go:117] "RemoveContainer" containerID="56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597" Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.592462 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597\": container with ID starting with 56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597 not found: ID does not exist" containerID="56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.592497 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597"} err="failed to get container status \"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597\": rpc error: code = NotFound desc = could not find container \"56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597\": container with ID starting with 56f4c64d6413cc5bc4edfcf3047aa5b45a567cb527bc710b266d604cfb388597 not found: ID does not exist" Jan 29 11:15:03 crc kubenswrapper[4593]: I0129 11:15:03.919581 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.919849 4593 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.920051 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:15:03 crc kubenswrapper[4593]: E0129 11:15:03.920137 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:05.92010146 +0000 UTC m=+971.793135651 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found Jan 29 11:15:04 crc kubenswrapper[4593]: I0129 11:15:04.436253 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" event={"ID":"1dc04f8a-c522-49b8-bdf6-59b7edad2d63","Type":"ContainerStarted","Data":"3463601aba040d487968e25f4e62ebe73e4169690defbbff65cdb06d70d88e14"} Jan 29 11:15:04 crc kubenswrapper[4593]: I0129 11:15:04.436701 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:15:04 crc kubenswrapper[4593]: I0129 11:15:04.860486 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" Jan 29 11:15:04 crc kubenswrapper[4593]: I0129 11:15:04.883060 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podStartSLOduration=3.883039157 podStartE2EDuration="3.883039157s" podCreationTimestamp="2026-01-29 11:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:04.468052274 +0000 UTC m=+970.341086495" watchObservedRunningTime="2026-01-29 11:15:04.883039157 +0000 UTC m=+970.756073348" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.037900 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume\") pod \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.038042 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4g5j\" (UniqueName: \"kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j\") pod \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.038072 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume\") pod \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\" (UID: \"8d624d92-85b0-48dc-94f4-047ac84aaa0c\") " Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.038745 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d624d92-85b0-48dc-94f4-047ac84aaa0c" (UID: "8d624d92-85b0-48dc-94f4-047ac84aaa0c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.044758 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d624d92-85b0-48dc-94f4-047ac84aaa0c" (UID: "8d624d92-85b0-48dc-94f4-047ac84aaa0c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.060790 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j" (OuterVolumeSpecName: "kube-api-access-j4g5j") pod "8d624d92-85b0-48dc-94f4-047ac84aaa0c" (UID: "8d624d92-85b0-48dc-94f4-047ac84aaa0c"). InnerVolumeSpecName "kube-api-access-j4g5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.087068 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" path="/var/lib/kubelet/pods/80b1ef7b-9dfd-4910-99a8-830a1735fb79/volumes" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.088346 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" path="/var/lib/kubelet/pods/9288612d-73d6-410c-b109-9d3124e96f9c/volumes" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.140297 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d624d92-85b0-48dc-94f4-047ac84aaa0c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.140339 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4g5j\" (UniqueName: \"kubernetes.io/projected/8d624d92-85b0-48dc-94f4-047ac84aaa0c-kube-api-access-j4g5j\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.140356 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d624d92-85b0-48dc-94f4-047ac84aaa0c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.448515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8" event={"ID":"8d624d92-85b0-48dc-94f4-047ac84aaa0c","Type":"ContainerDied","Data":"e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc"} Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.448572 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ffc8f638f234f1e4b2a1ef92c4d24c5debc912008dd0a9b438d90833fbf3dc" Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.448734 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"
Jan 29 11:15:05 crc kubenswrapper[4593]: I0129 11:15:05.951933 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0"
Jan 29 11:15:05 crc kubenswrapper[4593]: E0129 11:15:05.952133 4593 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 11:15:05 crc kubenswrapper[4593]: E0129 11:15:05.952308 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 11:15:05 crc kubenswrapper[4593]: E0129 11:15:05.952352 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:09.952338873 +0000 UTC m=+975.825373064 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found
Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034013 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jbnzf"]
Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034443 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="registry-server"
Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034467 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="registry-server"
Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034483 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="init"
Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034491 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="init"
Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034512 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="dnsmasq-dns"
Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034521 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="dnsmasq-dns"
Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034534 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="extract-utilities"
Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034543 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="extract-utilities"
Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034558 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="extract-content"
Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034566 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="extract-content"
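
Note how durationBeforeRetry doubles between attempts: 2s at 11:15:03, 4s here, and 8s at 11:15:10 further down. That is per-operation exponential backoff in the kubelet's nested pending operations. A rough sketch of the same doubling policy with apimachinery's wait package (parameter values are illustrative, not the kubelet's exact constants, and mountEtcSwift is a stand-in stub):

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

var errConfigMapMissing = errors.New(`configmap "swift-ring-files" not found`)

// mountEtcSwift stands in for MountVolume.SetUp; in this stub it always
// fails, the way the real mount does until the ConfigMap appears.
func mountEtcSwift() error { return errConfigMapMissing }

func main() {
	backoff := wait.Backoff{
		Duration: 2 * time.Second, // first durationBeforeRetry seen in the log
		Factor:   2.0,             // 2s -> 4s -> 8s, as at 11:15:03/05/10
		Steps:    5,               // give up after five attempts (illustrative)
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if mErr := mountEtcSwift(); mErr != nil {
			fmt.Println("retrying after error:", mErr)
			return false, nil // not done yet; wait the next interval
		}
		return true, nil // mount succeeded
	})
	fmt.Println("final:", err)
}
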
containerName="extract-content" Jan 29 11:15:06 crc kubenswrapper[4593]: E0129 11:15:06.034579 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d624d92-85b0-48dc-94f4-047ac84aaa0c" containerName="collect-profiles" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034586 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d624d92-85b0-48dc-94f4-047ac84aaa0c" containerName="collect-profiles" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034785 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d624d92-85b0-48dc-94f4-047ac84aaa0c" containerName="collect-profiles" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034798 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="80b1ef7b-9dfd-4910-99a8-830a1735fb79" containerName="registry-server" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.034811 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9288612d-73d6-410c-b109-9d3124e96f9c" containerName="dnsmasq-dns" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.035295 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.037660 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.037798 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.037924 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.050661 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jbnzf"] Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.154588 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.154771 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.154885 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.154913 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8mgf\" (UniqueName: \"kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.155061 
4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.155105 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.155189 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257169 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257207 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257243 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257343 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257370 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257415 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.257429 4593 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-k8mgf\" (UniqueName: \"kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.259166 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.259572 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.260380 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.263793 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.264059 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.277053 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.278353 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8mgf\" (UniqueName: \"kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf\") pod \"swift-ring-rebalance-jbnzf\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.352000 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:06 crc kubenswrapper[4593]: W0129 11:15:06.807155 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d1e7e96_e120_43f1_bff0_ea3d624e621b.slice/crio-d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6 WatchSource:0}: Error finding container d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6: Status 404 returned error can't find the container with id d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6 Jan 29 11:15:06 crc kubenswrapper[4593]: I0129 11:15:06.809301 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jbnzf"] Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.145016 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.307523 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-87bhd"] Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.309079 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.324378 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-87bhd"] Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.328727 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.472057 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jbnzf" event={"ID":"4d1e7e96-e120-43f1-bff0-ea3d624e621b","Type":"ContainerStarted","Data":"d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6"} Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.480111 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.480294 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn2fk\" (UniqueName: \"kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.521959 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.522047 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.582572 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " 
pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.582727 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn2fk\" (UniqueName: \"kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.583590 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.616960 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.617722 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn2fk\" (UniqueName: \"kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk\") pod \"root-account-create-update-87bhd\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") " pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:07 crc kubenswrapper[4593]: I0129 11:15:07.630953 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-87bhd" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.098453 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-87bhd"] Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.356024 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.441466 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.491556 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-87bhd" event={"ID":"d8a9eb9e-18f2-4150-973c-2e7baaca3484","Type":"ContainerStarted","Data":"2d726601a06f0f3b078ac9cfab32d3c08235958370c6a2e0cae055cc410e3e0d"} Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.491601 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-87bhd" event={"ID":"d8a9eb9e-18f2-4150-973c-2e7baaca3484","Type":"ContainerStarted","Data":"2d499c9f38de6188424842997bab2cb4adbe4ba156fe5f3bb80b847c37491bff"} Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.513783 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-87bhd" podStartSLOduration=1.513761806 podStartE2EDuration="1.513761806s" podCreationTimestamp="2026-01-29 11:15:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:08.509861022 +0000 UTC m=+974.382895213" watchObservedRunningTime="2026-01-29 11:15:08.513761806 +0000 UTC m=+974.386795997" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.582184 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 29 
11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.600203 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.798098 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-c4fzt"] Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.799065 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.812339 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c4fzt"] Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.918262 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb945\" (UniqueName: \"kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.918463 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.926288 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c3a7-account-create-update-9b49r"] Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.927338 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.929400 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 29 11:15:08 crc kubenswrapper[4593]: I0129 11:15:08.952390 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c3a7-account-create-update-9b49r"] Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.020027 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb945\" (UniqueName: \"kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.020092 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlqs8\" (UniqueName: \"kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.020123 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.020183 
4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.021262 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.073485 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb945\" (UniqueName: \"kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945\") pod \"placement-db-create-c4fzt\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.124764 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.131264 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlqs8\" (UniqueName: \"kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.131358 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.135305 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.152818 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlqs8\" (UniqueName: \"kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8\") pod \"placement-c3a7-account-create-update-9b49r\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.252983 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.376055 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cjzzm"] Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.377322 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.385648 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cjzzm"] Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.490678 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-70b0-account-create-update-c8qbm"] Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.491893 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.499397 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.500204 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-70b0-account-create-update-c8qbm"] Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.514522 4593 generic.go:334] "Generic (PLEG): container finished" podID="d8a9eb9e-18f2-4150-973c-2e7baaca3484" containerID="2d726601a06f0f3b078ac9cfab32d3c08235958370c6a2e0cae055cc410e3e0d" exitCode=0 Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.515521 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-87bhd" event={"ID":"d8a9eb9e-18f2-4150-973c-2e7baaca3484","Type":"ContainerDied","Data":"2d726601a06f0f3b078ac9cfab32d3c08235958370c6a2e0cae055cc410e3e0d"} Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.515736 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hnrxg" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="registry-server" containerID="cri-o://4895474b2f5eeb052b2d990d58ef03a99f4466ec22ffd294eacac21fca622134" gracePeriod=2 Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.538492 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.538611 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spkhz\" (UniqueName: \"kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.639714 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.639811 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrt45\" (UniqueName: \"kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 
11:15:09.639863 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spkhz\" (UniqueName: \"kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.639914 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.640706 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.671397 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spkhz\" (UniqueName: \"kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz\") pod \"glance-db-create-cjzzm\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.695227 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.741050 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.741192 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrt45\" (UniqueName: \"kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.741892 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.763398 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrt45\" (UniqueName: \"kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45\") pod \"glance-70b0-account-create-update-c8qbm\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:09 crc kubenswrapper[4593]: I0129 11:15:09.816398 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-70b0-account-create-update-c8qbm"
Jan 29 11:15:10 crc kubenswrapper[4593]: I0129 11:15:10.044797 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0"
Jan 29 11:15:10 crc kubenswrapper[4593]: E0129 11:15:10.045395 4593 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 29 11:15:10 crc kubenswrapper[4593]: E0129 11:15:10.045415 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 29 11:15:10 crc kubenswrapper[4593]: E0129 11:15:10.045481 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:18.04546257 +0000 UTC m=+983.918496761 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found
Jan 29 11:15:10 crc kubenswrapper[4593]: I0129 11:15:10.527875 4593 generic.go:334] "Generic (PLEG): container finished" podID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerID="4895474b2f5eeb052b2d990d58ef03a99f4466ec22ffd294eacac21fca622134" exitCode=0
Jan 29 11:15:10 crc kubenswrapper[4593]: I0129 11:15:10.528194 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerDied","Data":"4895474b2f5eeb052b2d990d58ef03a99f4466ec22ffd294eacac21fca622134"}
Jan 29 11:15:10 crc kubenswrapper[4593]: I0129 11:15:10.914975 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.511409 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg"
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.607241 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"]
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.607495 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-cgm9z" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="dnsmasq-dns" containerID="cri-o://16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a" gracePeriod=10
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.722549 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-87bhd"
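
The "Killing container with a grace period ... gracePeriod=10" entry is the SIGTERM-then-SIGKILL sequence: the runtime gets up to ten seconds before a hard kill, which is why dnsmasq-dns keeps running briefly and then fails its readiness probe below. The knob behind that number is the pod's terminationGracePeriodSeconds; a sketch with the corev1 types (pod name and image are placeholders, not taken from this deployment):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	grace := int64(10) // matches gracePeriod=10 in the log entry above
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dnsmasq-example", Namespace: "openstack"},
		Spec: corev1.PodSpec{
			// On delete, the kubelet sends SIGTERM, waits up to this many
			// seconds, then sends SIGKILL to anything still running.
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:  "dnsmasq-dns",
				Image: "example.invalid/dnsmasq:latest", // placeholder image
			}},
		},
	}
	fmt.Printf("grace period: %ds\n", *pod.Spec.TerminationGracePeriodSeconds)
}
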
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.776249 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts\") pod \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") "
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.776336 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn2fk\" (UniqueName: \"kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk\") pod \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\" (UID: \"d8a9eb9e-18f2-4150-973c-2e7baaca3484\") "
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.777797 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8a9eb9e-18f2-4150-973c-2e7baaca3484" (UID: "d8a9eb9e-18f2-4150-973c-2e7baaca3484"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.796181 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk" (OuterVolumeSpecName: "kube-api-access-qn2fk") pod "d8a9eb9e-18f2-4150-973c-2e7baaca3484" (UID: "d8a9eb9e-18f2-4150-973c-2e7baaca3484"). InnerVolumeSpecName "kube-api-access-qn2fk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.878864 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8a9eb9e-18f2-4150-973c-2e7baaca3484-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.879231 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn2fk\" (UniqueName: \"kubernetes.io/projected/d8a9eb9e-18f2-4150-973c-2e7baaca3484-kube-api-access-qn2fk\") on node \"crc\" DevicePath \"\""
Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.887988 4593 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.983134 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities\") pod \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.983173 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content\") pod \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.983258 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgv62\" (UniqueName: \"kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62\") pod \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\" (UID: \"ba99bea9-cf82-4eb7-8c7b-f171c534fc62\") " Jan 29 11:15:11 crc kubenswrapper[4593]: I0129 11:15:11.984503 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities" (OuterVolumeSpecName: "utilities") pod "ba99bea9-cf82-4eb7-8c7b-f171c534fc62" (UID: "ba99bea9-cf82-4eb7-8c7b-f171c534fc62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.000111 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62" (OuterVolumeSpecName: "kube-api-access-jgv62") pod "ba99bea9-cf82-4eb7-8c7b-f171c534fc62" (UID: "ba99bea9-cf82-4eb7-8c7b-f171c534fc62"). InnerVolumeSpecName "kube-api-access-jgv62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.073166 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba99bea9-cf82-4eb7-8c7b-f171c534fc62" (UID: "ba99bea9-cf82-4eb7-8c7b-f171c534fc62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.085034 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.085062 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.085073 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgv62\" (UniqueName: \"kubernetes.io/projected/ba99bea9-cf82-4eb7-8c7b-f171c534fc62-kube-api-access-jgv62\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.144431 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-cgm9z" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: connect: connection refused" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.422307 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cjzzm"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.453223 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cgm9z" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.557567 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cjzzm" event={"ID":"e2687b78-f425-4fae-9af8-7021f3e01e36","Type":"ContainerStarted","Data":"69543955059b6a02d7efbea367354349bec1818ede0d3acfb63fa9c3aa6c1a0a"} Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.559975 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnrxg" event={"ID":"ba99bea9-cf82-4eb7-8c7b-f171c534fc62","Type":"ContainerDied","Data":"d1f4402fb69794a1a6deb77fd346981fb6d8f2b3bd7eaaad3126ed929b264e54"} Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.560008 4593 scope.go:117] "RemoveContainer" containerID="4895474b2f5eeb052b2d990d58ef03a99f4466ec22ffd294eacac21fca622134" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.560115 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnrxg" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.565473 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jbnzf" event={"ID":"4d1e7e96-e120-43f1-bff0-ea3d624e621b","Type":"ContainerStarted","Data":"9ea8033b0ead06e96b066f4d434b2b21ca12373b475b3c1f489d3e7beb1ea468"} Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.582891 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-87bhd" event={"ID":"d8a9eb9e-18f2-4150-973c-2e7baaca3484","Type":"ContainerDied","Data":"2d499c9f38de6188424842997bab2cb4adbe4ba156fe5f3bb80b847c37491bff"} Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.582932 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d499c9f38de6188424842997bab2cb4adbe4ba156fe5f3bb80b847c37491bff" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.582987 4593 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.590978 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c3a7-account-create-update-9b49r"]
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.592216 4593 generic.go:334] "Generic (PLEG): container finished" podID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerID="16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a" exitCode=0
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.592251 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cgm9z" event={"ID":"ba134367-9e72-466a-8aa3-0bda1deb7791","Type":"ContainerDied","Data":"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a"}
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.592310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-cgm9z" event={"ID":"ba134367-9e72-466a-8aa3-0bda1deb7791","Type":"ContainerDied","Data":"03a28ce5a42adf28e21bd51fb0ee9216c7ab5bdb7d9e843e28d1f210295085a6"}
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.592368 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-cgm9z"
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.606163 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99nm4\" (UniqueName: \"kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4\") pod \"ba134367-9e72-466a-8aa3-0bda1deb7791\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") "
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.606227 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc\") pod \"ba134367-9e72-466a-8aa3-0bda1deb7791\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") "
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.606250 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb\") pod \"ba134367-9e72-466a-8aa3-0bda1deb7791\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") "
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.606335 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config\") pod \"ba134367-9e72-466a-8aa3-0bda1deb7791\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") "
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.606403 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb\") pod \"ba134367-9e72-466a-8aa3-0bda1deb7791\" (UID: \"ba134367-9e72-466a-8aa3-0bda1deb7791\") "
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.613182 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-70b0-account-create-update-c8qbm"]
Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.623055 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jbnzf" podStartSLOduration=1.573929867 podStartE2EDuration="6.623031265s" podCreationTimestamp="2026-01-29 11:15:06 +0000 UTC"
firstStartedPulling="2026-01-29 11:15:06.809385144 +0000 UTC m=+972.682419335" lastFinishedPulling="2026-01-29 11:15:11.858486542 +0000 UTC m=+977.731520733" observedRunningTime="2026-01-29 11:15:12.607452329 +0000 UTC m=+978.480486530" watchObservedRunningTime="2026-01-29 11:15:12.623031265 +0000 UTC m=+978.496065456" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.626180 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4" (OuterVolumeSpecName: "kube-api-access-99nm4") pod "ba134367-9e72-466a-8aa3-0bda1deb7791" (UID: "ba134367-9e72-466a-8aa3-0bda1deb7791"). InnerVolumeSpecName "kube-api-access-99nm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: W0129 11:15:12.640744 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2eab48b_4545_4fa3_81f1_6247ebcf425e.slice/crio-b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876 WatchSource:0}: Error finding container b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876: Status 404 returned error can't find the container with id b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876 Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.657338 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-c4fzt"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.709011 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99nm4\" (UniqueName: \"kubernetes.io/projected/ba134367-9e72-466a-8aa3-0bda1deb7791-kube-api-access-99nm4\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.786029 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config" (OuterVolumeSpecName: "config") pod "ba134367-9e72-466a-8aa3-0bda1deb7791" (UID: "ba134367-9e72-466a-8aa3-0bda1deb7791"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.792191 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba134367-9e72-466a-8aa3-0bda1deb7791" (UID: "ba134367-9e72-466a-8aa3-0bda1deb7791"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.815503 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.815560 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.816858 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ba134367-9e72-466a-8aa3-0bda1deb7791" (UID: "ba134367-9e72-466a-8aa3-0bda1deb7791"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.819189 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ba134367-9e72-466a-8aa3-0bda1deb7791" (UID: "ba134367-9e72-466a-8aa3-0bda1deb7791"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.877249 4593 scope.go:117] "RemoveContainer" containerID="af838fa010c8947df25073166fa4b7b48c902b1c9dfcc02609c3d4b2597c538c" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.910747 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.916881 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.916911 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba134367-9e72-466a-8aa3-0bda1deb7791-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.929309 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnrxg"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.940844 4593 scope.go:117] "RemoveContainer" containerID="cd84694d15788663bcca8f1cea58b3f9c8ab044022df23a01ee0a17afa892276" Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.969465 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.975404 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-cgm9z"] Jan 29 11:15:12 crc kubenswrapper[4593]: I0129 11:15:12.980837 4593 scope.go:117] "RemoveContainer" containerID="16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.002390 4593 scope.go:117] "RemoveContainer" containerID="42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.020948 4593 scope.go:117] "RemoveContainer" containerID="16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a" Jan 29 11:15:13 crc kubenswrapper[4593]: E0129 11:15:13.021280 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a\": container with ID starting with 16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a not found: ID does not exist" containerID="16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.021309 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a"} err="failed to get container status \"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a\": rpc error: code = NotFound desc = could not find container \"16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a\": container with ID 
starting with 16c330099663087d1ad14f43dde6f6b5da97e137920d113a4cc68d120af8d43a not found: ID does not exist" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.021335 4593 scope.go:117] "RemoveContainer" containerID="42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c" Jan 29 11:15:13 crc kubenswrapper[4593]: E0129 11:15:13.021704 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c\": container with ID starting with 42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c not found: ID does not exist" containerID="42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.021731 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c"} err="failed to get container status \"42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c\": rpc error: code = NotFound desc = could not find container \"42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c\": container with ID starting with 42e3e46a82a979e0d389f47be7049e973bc55893fd804a529a847013351b7e9c not found: ID does not exist" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.085528 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" path="/var/lib/kubelet/pods/ba134367-9e72-466a-8aa3-0bda1deb7791/volumes" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.086403 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" path="/var/lib/kubelet/pods/ba99bea9-cf82-4eb7-8c7b-f171c534fc62/volumes" Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.603084 4593 generic.go:334] "Generic (PLEG): container finished" podID="f2eab48b-4545-4fa3-81f1-6247ebcf425e" containerID="f4b832d6a02cddde771b6eeb4da2b7e8c024cb3a623b350dff1e411d17b9ecfd" exitCode=0 Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.603150 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c3a7-account-create-update-9b49r" event={"ID":"f2eab48b-4545-4fa3-81f1-6247ebcf425e","Type":"ContainerDied","Data":"f4b832d6a02cddde771b6eeb4da2b7e8c024cb3a623b350dff1e411d17b9ecfd"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.603177 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c3a7-account-create-update-9b49r" event={"ID":"f2eab48b-4545-4fa3-81f1-6247ebcf425e","Type":"ContainerStarted","Data":"b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.605850 4593 generic.go:334] "Generic (PLEG): container finished" podID="3b4524da-e80b-4bd2-a116-061694417007" containerID="b2686e149913ab0d7eb8e1c1ab82711e8bc8d0f1e7c674ad1bb843e01690c119" exitCode=0 Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.605954 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-70b0-account-create-update-c8qbm" event={"ID":"3b4524da-e80b-4bd2-a116-061694417007","Type":"ContainerDied","Data":"b2686e149913ab0d7eb8e1c1ab82711e8bc8d0f1e7c674ad1bb843e01690c119"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.605974 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-70b0-account-create-update-c8qbm" 
event={"ID":"3b4524da-e80b-4bd2-a116-061694417007","Type":"ContainerStarted","Data":"a0fda54eb084c2cf19c1e6dcbc83a9e09d8417502f27c897188c3a798eb76994"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.608342 4593 generic.go:334] "Generic (PLEG): container finished" podID="fdb1fb5b-1dc7-487a-b49d-d542eef7af31" containerID="c00b7731a137cc5e16b524de8c2c6a1402d07e79205488315ad3920c71b523b5" exitCode=0 Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.608382 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c4fzt" event={"ID":"fdb1fb5b-1dc7-487a-b49d-d542eef7af31","Type":"ContainerDied","Data":"c00b7731a137cc5e16b524de8c2c6a1402d07e79205488315ad3920c71b523b5"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.608398 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c4fzt" event={"ID":"fdb1fb5b-1dc7-487a-b49d-d542eef7af31","Type":"ContainerStarted","Data":"61f5eeb49ae22b41c16de9e85095516b89b44d599286692b28762a74f7dca621"} Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.609852 4593 generic.go:334] "Generic (PLEG): container finished" podID="e2687b78-f425-4fae-9af8-7021f3e01e36" containerID="1146c75a258cb4ad7f71cc2e37d3a74813526e1b88d59d1880e58f1ae91dd7d1" exitCode=0 Jan 29 11:15:13 crc kubenswrapper[4593]: I0129 11:15:13.610593 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cjzzm" event={"ID":"e2687b78-f425-4fae-9af8-7021f3e01e36","Type":"ContainerDied","Data":"1146c75a258cb4ad7f71cc2e37d3a74813526e1b88d59d1880e58f1ae91dd7d1"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.107820 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.255698 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.262510 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.271897 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.276299 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrt45\" (UniqueName: \"kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45\") pod \"3b4524da-e80b-4bd2-a116-061694417007\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.276494 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts\") pod \"3b4524da-e80b-4bd2-a116-061694417007\" (UID: \"3b4524da-e80b-4bd2-a116-061694417007\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.277287 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b4524da-e80b-4bd2-a116-061694417007" (UID: "3b4524da-e80b-4bd2-a116-061694417007"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.282689 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45" (OuterVolumeSpecName: "kube-api-access-zrt45") pod "3b4524da-e80b-4bd2-a116-061694417007" (UID: "3b4524da-e80b-4bd2-a116-061694417007"). InnerVolumeSpecName "kube-api-access-zrt45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378097 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts\") pod \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378215 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts\") pod \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378265 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlqs8\" (UniqueName: \"kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8\") pod \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\" (UID: \"f2eab48b-4545-4fa3-81f1-6247ebcf425e\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378308 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spkhz\" (UniqueName: \"kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz\") pod \"e2687b78-f425-4fae-9af8-7021f3e01e36\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378341 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts\") pod \"e2687b78-f425-4fae-9af8-7021f3e01e36\" (UID: \"e2687b78-f425-4fae-9af8-7021f3e01e36\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378383 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb945\" (UniqueName: \"kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945\") pod \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\" (UID: \"fdb1fb5b-1dc7-487a-b49d-d542eef7af31\") " Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378831 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b4524da-e80b-4bd2-a116-061694417007-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.378853 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrt45\" (UniqueName: \"kubernetes.io/projected/3b4524da-e80b-4bd2-a116-061694417007-kube-api-access-zrt45\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.379012 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"f2eab48b-4545-4fa3-81f1-6247ebcf425e" (UID: "f2eab48b-4545-4fa3-81f1-6247ebcf425e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.379093 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdb1fb5b-1dc7-487a-b49d-d542eef7af31" (UID: "fdb1fb5b-1dc7-487a-b49d-d542eef7af31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.379595 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2687b78-f425-4fae-9af8-7021f3e01e36" (UID: "e2687b78-f425-4fae-9af8-7021f3e01e36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.382167 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945" (OuterVolumeSpecName: "kube-api-access-qb945") pod "fdb1fb5b-1dc7-487a-b49d-d542eef7af31" (UID: "fdb1fb5b-1dc7-487a-b49d-d542eef7af31"). InnerVolumeSpecName "kube-api-access-qb945". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.382213 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8" (OuterVolumeSpecName: "kube-api-access-zlqs8") pod "f2eab48b-4545-4fa3-81f1-6247ebcf425e" (UID: "f2eab48b-4545-4fa3-81f1-6247ebcf425e"). InnerVolumeSpecName "kube-api-access-zlqs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.382559 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz" (OuterVolumeSpecName: "kube-api-access-spkhz") pod "e2687b78-f425-4fae-9af8-7021f3e01e36" (UID: "e2687b78-f425-4fae-9af8-7021f3e01e36"). InnerVolumeSpecName "kube-api-access-spkhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481242 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlqs8\" (UniqueName: \"kubernetes.io/projected/f2eab48b-4545-4fa3-81f1-6247ebcf425e-kube-api-access-zlqs8\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481280 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spkhz\" (UniqueName: \"kubernetes.io/projected/e2687b78-f425-4fae-9af8-7021f3e01e36-kube-api-access-spkhz\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481293 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2687b78-f425-4fae-9af8-7021f3e01e36-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481304 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb945\" (UniqueName: \"kubernetes.io/projected/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-kube-api-access-qb945\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481315 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2eab48b-4545-4fa3-81f1-6247ebcf425e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.481325 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdb1fb5b-1dc7-487a-b49d-d542eef7af31-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.626707 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-c4fzt" event={"ID":"fdb1fb5b-1dc7-487a-b49d-d542eef7af31","Type":"ContainerDied","Data":"61f5eeb49ae22b41c16de9e85095516b89b44d599286692b28762a74f7dca621"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.626758 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f5eeb49ae22b41c16de9e85095516b89b44d599286692b28762a74f7dca621" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.626740 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-c4fzt" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.628913 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cjzzm" event={"ID":"e2687b78-f425-4fae-9af8-7021f3e01e36","Type":"ContainerDied","Data":"69543955059b6a02d7efbea367354349bec1818ede0d3acfb63fa9c3aa6c1a0a"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.628931 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cjzzm" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.628935 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69543955059b6a02d7efbea367354349bec1818ede0d3acfb63fa9c3aa6c1a0a" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.630901 4593 generic.go:334] "Generic (PLEG): container finished" podID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerID="44978dbad6338f76a863bda910ccc44233b86b74e07d252f43136dd31d7cd624" exitCode=0 Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.630961 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerDied","Data":"44978dbad6338f76a863bda910ccc44233b86b74e07d252f43136dd31d7cd624"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.636058 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c3a7-account-create-update-9b49r" event={"ID":"f2eab48b-4545-4fa3-81f1-6247ebcf425e","Type":"ContainerDied","Data":"b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.636115 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b82eec590832523688db0a6968a160c841e0e9d79bb0cf3ff1d1a27dc55df876" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.636186 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c3a7-account-create-update-9b49r" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.641149 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-70b0-account-create-update-c8qbm" event={"ID":"3b4524da-e80b-4bd2-a116-061694417007","Type":"ContainerDied","Data":"a0fda54eb084c2cf19c1e6dcbc83a9e09d8417502f27c897188c3a798eb76994"} Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.641201 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0fda54eb084c2cf19c1e6dcbc83a9e09d8417502f27c897188c3a798eb76994" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.641271 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-70b0-account-create-update-c8qbm" Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.911357 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-87bhd"] Jan 29 11:15:15 crc kubenswrapper[4593]: I0129 11:15:15.917499 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-87bhd"] Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.005798 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sj2mz"] Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006112 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="extract-content" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006127 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="extract-content" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006136 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb1fb5b-1dc7-487a-b49d-d542eef7af31" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006142 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb1fb5b-1dc7-487a-b49d-d542eef7af31" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006152 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="registry-server" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006159 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="registry-server" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006169 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2687b78-f425-4fae-9af8-7021f3e01e36" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006174 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2687b78-f425-4fae-9af8-7021f3e01e36" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006185 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eab48b-4545-4fa3-81f1-6247ebcf425e" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006190 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eab48b-4545-4fa3-81f1-6247ebcf425e" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006201 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a9eb9e-18f2-4150-973c-2e7baaca3484" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006207 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a9eb9e-18f2-4150-973c-2e7baaca3484" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006219 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="extract-utilities" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006224 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="extract-utilities" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006242 4593 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="init" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006248 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="init" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006258 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4524da-e80b-4bd2-a116-061694417007" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006266 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4524da-e80b-4bd2-a116-061694417007" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: E0129 11:15:16.006276 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="dnsmasq-dns" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006281 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="dnsmasq-dns" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006414 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2687b78-f425-4fae-9af8-7021f3e01e36" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006423 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eab48b-4545-4fa3-81f1-6247ebcf425e" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006436 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba99bea9-cf82-4eb7-8c7b-f171c534fc62" containerName="registry-server" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006446 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba134367-9e72-466a-8aa3-0bda1deb7791" containerName="dnsmasq-dns" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006457 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a9eb9e-18f2-4150-973c-2e7baaca3484" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006466 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4524da-e80b-4bd2-a116-061694417007" containerName="mariadb-account-create-update" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006474 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb1fb5b-1dc7-487a-b49d-d542eef7af31" containerName="mariadb-database-create" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.006971 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.009082 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.021146 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sj2mz"] Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.090199 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.090259 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvb72\" (UniqueName: \"kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.191325 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.191719 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvb72\" (UniqueName: \"kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.192329 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.210862 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvb72\" (UniqueName: \"kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72\") pod \"root-account-create-update-sj2mz\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.349262 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.651232 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerStarted","Data":"b4905f54e6b8f178fee9edd7eecf274cac9966dfb2e310545422ab1ab6e185c0"} Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.651681 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.652863 4593 generic.go:334] "Generic (PLEG): container finished" podID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerID="6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f" exitCode=0 Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.652895 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerDied","Data":"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f"} Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.711688 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.528156822 podStartE2EDuration="1m12.711673473s" podCreationTimestamp="2026-01-29 11:14:04 +0000 UTC" firstStartedPulling="2026-01-29 11:14:06.655265118 +0000 UTC m=+912.528299309" lastFinishedPulling="2026-01-29 11:14:40.838781769 +0000 UTC m=+946.711815960" observedRunningTime="2026-01-29 11:15:16.706177126 +0000 UTC m=+982.579211337" watchObservedRunningTime="2026-01-29 11:15:16.711673473 +0000 UTC m=+982.584707654" Jan 29 11:15:16 crc kubenswrapper[4593]: I0129 11:15:16.862693 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sj2mz"] Jan 29 11:15:16 crc kubenswrapper[4593]: W0129 11:15:16.875030 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod757d1461_f6a2_4062_be74_0abc5c507af2.slice/crio-78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd WatchSource:0}: Error finding container 78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd: Status 404 returned error can't find the container with id 78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.083670 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8a9eb9e-18f2-4150-973c-2e7baaca3484" path="/var/lib/kubelet/pods/d8a9eb9e-18f2-4150-973c-2e7baaca3484/volumes" Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.662859 4593 generic.go:334] "Generic (PLEG): container finished" podID="757d1461-f6a2-4062-be74-0abc5c507af2" containerID="b731ce61732546e5002e6093b39d4676cefa4ead9d8427f5427a357a3a10832e" exitCode=0 Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.662899 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sj2mz" event={"ID":"757d1461-f6a2-4062-be74-0abc5c507af2","Type":"ContainerDied","Data":"b731ce61732546e5002e6093b39d4676cefa4ead9d8427f5427a357a3a10832e"} Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.663346 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sj2mz" 
event={"ID":"757d1461-f6a2-4062-be74-0abc5c507af2","Type":"ContainerStarted","Data":"78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd"} Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.665081 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerStarted","Data":"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112"} Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.665403 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:15:17 crc kubenswrapper[4593]: I0129 11:15:17.727759 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.81662394 podStartE2EDuration="1m13.727740398s" podCreationTimestamp="2026-01-29 11:14:04 +0000 UTC" firstStartedPulling="2026-01-29 11:14:06.965415674 +0000 UTC m=+912.838449865" lastFinishedPulling="2026-01-29 11:14:41.876532132 +0000 UTC m=+947.749566323" observedRunningTime="2026-01-29 11:15:17.725374765 +0000 UTC m=+983.598408976" watchObservedRunningTime="2026-01-29 11:15:17.727740398 +0000 UTC m=+983.600774589" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.125794 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:18 crc kubenswrapper[4593]: E0129 11:15:18.126053 4593 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 29 11:15:18 crc kubenswrapper[4593]: E0129 11:15:18.126080 4593 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 29 11:15:18 crc kubenswrapper[4593]: E0129 11:15:18.127161 4593 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift podName:307ad072-fdfc-4c55-8891-bc041d755b1a nodeName:}" failed. No retries permitted until 2026-01-29 11:15:34.12690255 +0000 UTC m=+999.999936741 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift") pod "swift-storage-0" (UID: "307ad072-fdfc-4c55-8891-bc041d755b1a") : configmap "swift-ring-files" not found Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.440607 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-pz4nl"] Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.441710 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.457662 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pz4nl"] Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.534557 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjlt2\" (UniqueName: \"kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.534946 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.567051 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b99c-account-create-update-49grn"] Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.568222 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.570560 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.594048 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b99c-account-create-update-49grn"] Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.636033 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjlt2\" (UniqueName: \"kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.636123 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.636143 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4dn\" (UniqueName: \"kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.636206 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.636948 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.653739 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjlt2\" (UniqueName: \"kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2\") pod \"keystone-db-create-pz4nl\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.737864 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.738100 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn4dn\" (UniqueName: \"kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.739745 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.759342 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.761283 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn4dn\" (UniqueName: \"kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn\") pod \"keystone-b99c-account-create-update-49grn\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:18 crc kubenswrapper[4593]: I0129 11:15:18.887827 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.142981 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.166948 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.248340 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-pz4nl"] Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.248966 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvb72\" (UniqueName: \"kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72\") pod \"757d1461-f6a2-4062-be74-0abc5c507af2\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.249145 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts\") pod \"757d1461-f6a2-4062-be74-0abc5c507af2\" (UID: \"757d1461-f6a2-4062-be74-0abc5c507af2\") " Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.258233 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72" (OuterVolumeSpecName: "kube-api-access-rvb72") pod "757d1461-f6a2-4062-be74-0abc5c507af2" (UID: "757d1461-f6a2-4062-be74-0abc5c507af2"). InnerVolumeSpecName "kube-api-access-rvb72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.258563 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "757d1461-f6a2-4062-be74-0abc5c507af2" (UID: "757d1461-f6a2-4062-be74-0abc5c507af2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.354654 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvb72\" (UniqueName: \"kubernetes.io/projected/757d1461-f6a2-4062-be74-0abc5c507af2-kube-api-access-rvb72\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.354699 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/757d1461-f6a2-4062-be74-0abc5c507af2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.618817 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b99c-account-create-update-49grn"] Jan 29 11:15:19 crc kubenswrapper[4593]: W0129 11:15:19.629435 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12899826_03ea_4b37_b523_74946fd54dee.slice/crio-a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae WatchSource:0}: Error finding container a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae: Status 404 returned error can't find the container with id a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.679122 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b99c-account-create-update-49grn" event={"ID":"12899826-03ea-4b37-b523-74946fd54dee","Type":"ContainerStarted","Data":"a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae"} Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.680797 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sj2mz" event={"ID":"757d1461-f6a2-4062-be74-0abc5c507af2","Type":"ContainerDied","Data":"78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd"} Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.680838 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c78f5855371060dcd14be295bcb065c887a6427f808133dd98c7f1ca4d66dd" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.680842 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sj2mz" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.700197 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pz4nl" event={"ID":"a84071c3-9564-41ef-b38f-fd40e1403fa8","Type":"ContainerStarted","Data":"2e1d0fad53de84474f89284c6a88dc3a72dfb695af32b237f2378dd7177ae8c5"} Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.700246 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pz4nl" event={"ID":"a84071c3-9564-41ef-b38f-fd40e1403fa8","Type":"ContainerStarted","Data":"7edbe171478325ecdd7fbb56c02ea4d91fc80a6acf8ee4d5d37e9f6cbb0c7f50"} Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.747752 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-pz4nl" podStartSLOduration=1.747730082 podStartE2EDuration="1.747730082s" podCreationTimestamp="2026-01-29 11:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:19.743622304 +0000 UTC m=+985.616656495" watchObservedRunningTime="2026-01-29 11:15:19.747730082 +0000 UTC m=+985.620764273" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.770353 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-db54x"] Jan 29 11:15:19 crc kubenswrapper[4593]: E0129 11:15:19.770916 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757d1461-f6a2-4062-be74-0abc5c507af2" containerName="mariadb-account-create-update" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.770933 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="757d1461-f6a2-4062-be74-0abc5c507af2" containerName="mariadb-account-create-update" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.771107 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="757d1461-f6a2-4062-be74-0abc5c507af2" containerName="mariadb-account-create-update" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.771616 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.775103 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lfv28" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.787583 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.789080 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-db54x"] Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.864784 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.864839 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.864867 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.864961 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4lrf\" (UniqueName: \"kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.966918 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.966987 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.967026 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.967149 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4lrf\" (UniqueName: \"kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf\") pod 
\"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.973421 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.973724 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.984559 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:19 crc kubenswrapper[4593]: I0129 11:15:19.986453 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4lrf\" (UniqueName: \"kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf\") pod \"glance-db-sync-db54x\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " pod="openstack/glance-db-sync-db54x" Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.097092 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-db54x" Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.718085 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b99c-account-create-update-49grn" event={"ID":"12899826-03ea-4b37-b523-74946fd54dee","Type":"ContainerDied","Data":"cfeb01d9eafd6f66b4b9db53f4dc0ef8f8de91ea87a6bf0dc6e1a2b4cfb6bce8"} Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.719532 4593 generic.go:334] "Generic (PLEG): container finished" podID="12899826-03ea-4b37-b523-74946fd54dee" containerID="cfeb01d9eafd6f66b4b9db53f4dc0ef8f8de91ea87a6bf0dc6e1a2b4cfb6bce8" exitCode=0 Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.721842 4593 generic.go:334] "Generic (PLEG): container finished" podID="4d1e7e96-e120-43f1-bff0-ea3d624e621b" containerID="9ea8033b0ead06e96b066f4d434b2b21ca12373b475b3c1f489d3e7beb1ea468" exitCode=0 Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.721945 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jbnzf" event={"ID":"4d1e7e96-e120-43f1-bff0-ea3d624e621b","Type":"ContainerDied","Data":"9ea8033b0ead06e96b066f4d434b2b21ca12373b475b3c1f489d3e7beb1ea468"} Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.724603 4593 generic.go:334] "Generic (PLEG): container finished" podID="a84071c3-9564-41ef-b38f-fd40e1403fa8" containerID="2e1d0fad53de84474f89284c6a88dc3a72dfb695af32b237f2378dd7177ae8c5" exitCode=0 Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.724775 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pz4nl" event={"ID":"a84071c3-9564-41ef-b38f-fd40e1403fa8","Type":"ContainerDied","Data":"2e1d0fad53de84474f89284c6a88dc3a72dfb695af32b237f2378dd7177ae8c5"} Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.742065 4593 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cc9qq" podUID="df5842a4-132b-4c71-a970-efe4f00a3827" containerName="ovn-controller" probeResult="failure" output=< Jan 29 11:15:20 crc kubenswrapper[4593]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 11:15:20 crc kubenswrapper[4593]: > Jan 29 11:15:20 crc kubenswrapper[4593]: W0129 11:15:20.789451 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6bbbb39_f79c_4647_976b_6225ac21e63b.slice/crio-75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59 WatchSource:0}: Error finding container 75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59: Status 404 returned error can't find the container with id 75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59 Jan 29 11:15:20 crc kubenswrapper[4593]: I0129 11:15:20.808253 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-db54x"] Jan 29 11:15:21 crc kubenswrapper[4593]: I0129 11:15:21.735650 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-db54x" event={"ID":"a6bbbb39-f79c-4647-976b-6225ac21e63b","Type":"ContainerStarted","Data":"75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59"} Jan 29 11:15:21 crc kubenswrapper[4593]: I0129 11:15:21.914367 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:15:21 crc kubenswrapper[4593]: I0129 11:15:21.917754 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:21 crc kubenswrapper[4593]: I0129 11:15:21.933314 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.027091 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnh7f\" (UniqueName: \"kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.027161 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.027365 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.129003 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnh7f\" (UniqueName: \"kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " 
pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.129426 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.129524 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.130251 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.130278 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.172293 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnh7f\" (UniqueName: \"kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f\") pod \"certified-operators-4q5nh\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.246129 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.358964 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sj2mz"] Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.373012 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sj2mz"] Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.412137 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.451589 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.515417 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.539374 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn4dn\" (UniqueName: \"kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn\") pod \"12899826-03ea-4b37-b523-74946fd54dee\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.539457 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts\") pod \"a84071c3-9564-41ef-b38f-fd40e1403fa8\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.539623 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjlt2\" (UniqueName: \"kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2\") pod \"a84071c3-9564-41ef-b38f-fd40e1403fa8\" (UID: \"a84071c3-9564-41ef-b38f-fd40e1403fa8\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.539700 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts\") pod \"12899826-03ea-4b37-b523-74946fd54dee\" (UID: \"12899826-03ea-4b37-b523-74946fd54dee\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.541004 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12899826-03ea-4b37-b523-74946fd54dee" (UID: "12899826-03ea-4b37-b523-74946fd54dee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.542438 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a84071c3-9564-41ef-b38f-fd40e1403fa8" (UID: "a84071c3-9564-41ef-b38f-fd40e1403fa8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.549725 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2" (OuterVolumeSpecName: "kube-api-access-sjlt2") pod "a84071c3-9564-41ef-b38f-fd40e1403fa8" (UID: "a84071c3-9564-41ef-b38f-fd40e1403fa8"). InnerVolumeSpecName "kube-api-access-sjlt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.551087 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn" (OuterVolumeSpecName: "kube-api-access-gn4dn") pod "12899826-03ea-4b37-b523-74946fd54dee" (UID: "12899826-03ea-4b37-b523-74946fd54dee"). InnerVolumeSpecName "kube-api-access-gn4dn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641162 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641245 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641292 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8mgf\" (UniqueName: \"kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641392 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641438 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641489 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641520 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf\") pod \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\" (UID: \"4d1e7e96-e120-43f1-bff0-ea3d624e621b\") " Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641973 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjlt2\" (UniqueName: \"kubernetes.io/projected/a84071c3-9564-41ef-b38f-fd40e1403fa8-kube-api-access-sjlt2\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.641997 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12899826-03ea-4b37-b523-74946fd54dee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.642009 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn4dn\" (UniqueName: \"kubernetes.io/projected/12899826-03ea-4b37-b523-74946fd54dee-kube-api-access-gn4dn\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.642368 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a84071c3-9564-41ef-b38f-fd40e1403fa8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.643505 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.644618 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.646929 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf" (OuterVolumeSpecName: "kube-api-access-k8mgf") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "kube-api-access-k8mgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.661457 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.692387 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.704666 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.721909 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts" (OuterVolumeSpecName: "scripts") pod "4d1e7e96-e120-43f1-bff0-ea3d624e621b" (UID: "4d1e7e96-e120-43f1-bff0-ea3d624e621b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.743247 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b99c-account-create-update-49grn" event={"ID":"12899826-03ea-4b37-b523-74946fd54dee","Type":"ContainerDied","Data":"a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae"} Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.743288 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7bf5d9ebc45e57b0ac3831f0b09f843f4fb95ed8073f4c501619971835c65ae" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.743363 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b99c-account-create-update-49grn" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744235 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8mgf\" (UniqueName: \"kubernetes.io/projected/4d1e7e96-e120-43f1-bff0-ea3d624e621b-kube-api-access-k8mgf\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744256 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744266 4593 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744276 4593 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744284 4593 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/4d1e7e96-e120-43f1-bff0-ea3d624e621b-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744292 4593 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/4d1e7e96-e120-43f1-bff0-ea3d624e621b-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.744300 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d1e7e96-e120-43f1-bff0-ea3d624e621b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.745765 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jbnzf" event={"ID":"4d1e7e96-e120-43f1-bff0-ea3d624e621b","Type":"ContainerDied","Data":"d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6"} Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.745788 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d77f0fd952398dea26e9f4a4bd94e337070014de0b7d5f082920e95b0dabccb6" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.745820 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jbnzf" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.755495 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-pz4nl" event={"ID":"a84071c3-9564-41ef-b38f-fd40e1403fa8","Type":"ContainerDied","Data":"7edbe171478325ecdd7fbb56c02ea4d91fc80a6acf8ee4d5d37e9f6cbb0c7f50"} Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.755541 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7edbe171478325ecdd7fbb56c02ea4d91fc80a6acf8ee4d5d37e9f6cbb0c7f50" Jan 29 11:15:22 crc kubenswrapper[4593]: I0129 11:15:22.755600 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-pz4nl" Jan 29 11:15:23 crc kubenswrapper[4593]: I0129 11:15:23.085088 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757d1461-f6a2-4062-be74-0abc5c507af2" path="/var/lib/kubelet/pods/757d1461-f6a2-4062-be74-0abc5c507af2/volumes" Jan 29 11:15:23 crc kubenswrapper[4593]: I0129 11:15:23.127204 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:15:23 crc kubenswrapper[4593]: I0129 11:15:23.770702 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerStarted","Data":"78dbfe42e92421682419cdaea165d73392eb4f589d0fece85d9b2c89989dd32e"} Jan 29 11:15:24 crc kubenswrapper[4593]: I0129 11:15:24.785268 4593 generic.go:334] "Generic (PLEG): container finished" podID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerID="c6e6f1ac55c53b64f5a8d09aab84fcbf98dc6146a8ab819b2f4a3c9dfdc9a62a" exitCode=0 Jan 29 11:15:24 crc kubenswrapper[4593]: I0129 11:15:24.785453 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerDied","Data":"c6e6f1ac55c53b64f5a8d09aab84fcbf98dc6146a8ab819b2f4a3c9dfdc9a62a"} Jan 29 11:15:25 crc kubenswrapper[4593]: I0129 11:15:25.735667 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-cc9qq" podUID="df5842a4-132b-4c71-a970-efe4f00a3827" containerName="ovn-controller" probeResult="failure" output=< Jan 29 11:15:25 crc kubenswrapper[4593]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 29 11:15:25 crc kubenswrapper[4593]: > Jan 29 11:15:25 crc kubenswrapper[4593]: I0129 11:15:25.801564 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:15:25 crc kubenswrapper[4593]: I0129 11:15:25.816817 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Jan 29 11:15:25 crc kubenswrapper[4593]: I0129 11:15:25.837338 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-x49lj" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.077502 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-cc9qq-config-tbd2h"] Jan 29 11:15:26 crc kubenswrapper[4593]: E0129 11:15:26.077885 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1e7e96-e120-43f1-bff0-ea3d624e621b" 
containerName="swift-ring-rebalance" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.077902 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1e7e96-e120-43f1-bff0-ea3d624e621b" containerName="swift-ring-rebalance" Jan 29 11:15:26 crc kubenswrapper[4593]: E0129 11:15:26.077918 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84071c3-9564-41ef-b38f-fd40e1403fa8" containerName="mariadb-database-create" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.077925 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84071c3-9564-41ef-b38f-fd40e1403fa8" containerName="mariadb-database-create" Jan 29 11:15:26 crc kubenswrapper[4593]: E0129 11:15:26.077937 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12899826-03ea-4b37-b523-74946fd54dee" containerName="mariadb-account-create-update" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.077944 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="12899826-03ea-4b37-b523-74946fd54dee" containerName="mariadb-account-create-update" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.078081 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1e7e96-e120-43f1-bff0-ea3d624e621b" containerName="swift-ring-rebalance" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.078093 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a84071c3-9564-41ef-b38f-fd40e1403fa8" containerName="mariadb-database-create" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.078106 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="12899826-03ea-4b37-b523-74946fd54dee" containerName="mariadb-account-create-update" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.080010 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.090706 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cc9qq-config-tbd2h"] Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.092947 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.218882 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.218931 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m28jc\" (UniqueName: \"kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.219033 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.219130 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.219155 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.219200 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.264042 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.321139 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: 
\"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.321963 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322141 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322169 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322218 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322288 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322314 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m28jc\" (UniqueName: \"kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322811 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322863 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.322862 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: 
\"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.324626 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.354979 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m28jc\" (UniqueName: \"kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc\") pod \"ovn-controller-cc9qq-config-tbd2h\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.397227 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.810716 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerStarted","Data":"26d8db7acae03adbd8a96b95ffa16e626d4c4da2a6d0ab63963a1ab8a16e14e7"} Jan 29 11:15:26 crc kubenswrapper[4593]: I0129 11:15:26.902824 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-cc9qq-config-tbd2h"] Jan 29 11:15:26 crc kubenswrapper[4593]: W0129 11:15:26.912994 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6405a039_ae6d_4255_891c_ef8452e19df3.slice/crio-a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e WatchSource:0}: Error finding container a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e: Status 404 returned error can't find the container with id a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.372499 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-625ls"] Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.374013 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.376583 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.380626 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-625ls"] Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.442551 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mckcx\" (UniqueName: \"kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.442749 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.544339 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mckcx\" (UniqueName: \"kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.544472 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.545406 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.565286 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mckcx\" (UniqueName: \"kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx\") pod \"root-account-create-update-625ls\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.693429 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-625ls" Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.826194 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq-config-tbd2h" event={"ID":"6405a039-ae6d-4255-891c-ef8452e19df3","Type":"ContainerStarted","Data":"bb01aea62e7547286b44d9743a913549a411ace53ed9b60fd827a2aca107007a"} Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.826504 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq-config-tbd2h" event={"ID":"6405a039-ae6d-4255-891c-ef8452e19df3","Type":"ContainerStarted","Data":"a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e"} Jan 29 11:15:27 crc kubenswrapper[4593]: I0129 11:15:27.856045 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-cc9qq-config-tbd2h" podStartSLOduration=1.856019178 podStartE2EDuration="1.856019178s" podCreationTimestamp="2026-01-29 11:15:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:27.842439886 +0000 UTC m=+993.715474077" watchObservedRunningTime="2026-01-29 11:15:27.856019178 +0000 UTC m=+993.729053369" Jan 29 11:15:28 crc kubenswrapper[4593]: I0129 11:15:28.191813 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-625ls"] Jan 29 11:15:28 crc kubenswrapper[4593]: I0129 11:15:28.835495 4593 generic.go:334] "Generic (PLEG): container finished" podID="6405a039-ae6d-4255-891c-ef8452e19df3" containerID="bb01aea62e7547286b44d9743a913549a411ace53ed9b60fd827a2aca107007a" exitCode=0 Jan 29 11:15:28 crc kubenswrapper[4593]: I0129 11:15:28.835580 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq-config-tbd2h" event={"ID":"6405a039-ae6d-4255-891c-ef8452e19df3","Type":"ContainerDied","Data":"bb01aea62e7547286b44d9743a913549a411ace53ed9b60fd827a2aca107007a"} Jan 29 11:15:28 crc kubenswrapper[4593]: I0129 11:15:28.837460 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-625ls" event={"ID":"56d59502-9350-4842-bd01-35d55f0b47fa","Type":"ContainerStarted","Data":"0fe30972eae6fe027a2826fd5f842e093abe225a13c6181792f977be2efdbfe1"} Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.715751 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.717773 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.727193 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.816153 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pvlg\" (UniqueName: \"kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.816193 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.816240 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.853902 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-625ls" event={"ID":"56d59502-9350-4842-bd01-35d55f0b47fa","Type":"ContainerStarted","Data":"18ec4b46dd2b143a4699e4f0f9fb21bf0908d4fea6194256ca5d46a4b1e3154b"} Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.874216 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-625ls" podStartSLOduration=2.874198575 podStartE2EDuration="2.874198575s" podCreationTimestamp="2026-01-29 11:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:29.873492166 +0000 UTC m=+995.746526357" watchObservedRunningTime="2026-01-29 11:15:29.874198575 +0000 UTC m=+995.747232766" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.918135 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pvlg\" (UniqueName: \"kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.918178 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.918221 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 
11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.918838 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.919535 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:29 crc kubenswrapper[4593]: I0129 11:15:29.941199 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pvlg\" (UniqueName: \"kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg\") pod \"redhat-operators-k4l8n\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:30 crc kubenswrapper[4593]: I0129 11:15:30.053012 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:15:30 crc kubenswrapper[4593]: I0129 11:15:30.725499 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-cc9qq" Jan 29 11:15:30 crc kubenswrapper[4593]: I0129 11:15:30.862807 4593 generic.go:334] "Generic (PLEG): container finished" podID="56d59502-9350-4842-bd01-35d55f0b47fa" containerID="18ec4b46dd2b143a4699e4f0f9fb21bf0908d4fea6194256ca5d46a4b1e3154b" exitCode=0 Jan 29 11:15:30 crc kubenswrapper[4593]: I0129 11:15:30.862847 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-625ls" event={"ID":"56d59502-9350-4842-bd01-35d55f0b47fa","Type":"ContainerDied","Data":"18ec4b46dd2b143a4699e4f0f9fb21bf0908d4fea6194256ca5d46a4b1e3154b"} Jan 29 11:15:31 crc kubenswrapper[4593]: I0129 11:15:31.871539 4593 generic.go:334] "Generic (PLEG): container finished" podID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerID="26d8db7acae03adbd8a96b95ffa16e626d4c4da2a6d0ab63963a1ab8a16e14e7" exitCode=0 Jan 29 11:15:31 crc kubenswrapper[4593]: I0129 11:15:31.871791 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerDied","Data":"26d8db7acae03adbd8a96b95ffa16e626d4c4da2a6d0ab63963a1ab8a16e14e7"} Jan 29 11:15:33 crc kubenswrapper[4593]: I0129 11:15:33.946057 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:15:33 crc kubenswrapper[4593]: I0129 11:15:33.946864 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:15:34 crc kubenswrapper[4593]: I0129 11:15:34.195250 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:34 crc kubenswrapper[4593]: I0129 11:15:34.202074 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/307ad072-fdfc-4c55-8891-bc041d755b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"307ad072-fdfc-4c55-8891-bc041d755b1a\") " pod="openstack/swift-storage-0" Jan 29 11:15:34 crc kubenswrapper[4593]: I0129 11:15:34.388206 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 29 11:15:35 crc kubenswrapper[4593]: I0129 11:15:35.819455 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.136033 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vdz52"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.149258 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.188451 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vdz52"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.240960 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlg4\" (UniqueName: \"kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.241098 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.265567 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.343046 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlg4\" (UniqueName: \"kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.343165 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.343834 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 
11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.366107 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-0486-account-create-update-f9r68"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.370382 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.376145 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.377205 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlg4\" (UniqueName: \"kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4\") pod \"barbican-db-create-vdz52\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.386616 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0486-account-create-update-f9r68"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.493458 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.525131 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-9hskn"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.526364 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.551553 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.551621 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l48ll\" (UniqueName: \"kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.584187 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9hskn"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.639238 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wzm6z"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.640427 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.643752 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.645709 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.645897 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.646029 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h76tz" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.653960 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l48ll\" (UniqueName: \"kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.654072 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nxmp\" (UniqueName: \"kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp\") pod \"cinder-db-create-9hskn\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.654113 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts\") pod \"cinder-db-create-9hskn\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.654155 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.654793 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.684973 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wzm6z"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.692491 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l48ll\" (UniqueName: \"kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll\") pod \"barbican-0486-account-create-update-f9r68\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.695456 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4c8a-account-create-update-psrpm"] Jan 29 
11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.696488 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.715080 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.752380 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.757951 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.758016 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nxmp\" (UniqueName: \"kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp\") pod \"cinder-db-create-9hskn\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.758049 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.758066 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnrhr\" (UniqueName: \"kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.758089 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts\") pod \"cinder-db-create-9hskn\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.758812 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts\") pod \"cinder-db-create-9hskn\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.783777 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4c8a-account-create-update-psrpm"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.808721 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-jgv94"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.810429 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nxmp\" (UniqueName: \"kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp\") pod \"cinder-db-create-9hskn\" (UID: 
\"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.811951 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.837837 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jgv94"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.840988 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.859810 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.859864 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.859882 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnrhr\" (UniqueName: \"kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.859939 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.859964 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkbv8\" (UniqueName: \"kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.870515 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.878872 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.889419 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnrhr\" (UniqueName: 
\"kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr\") pod \"keystone-db-sync-wzm6z\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.960257 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-140c-account-create-update-csqgp"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.961184 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.961250 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.961305 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkbv8\" (UniqueName: \"kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.961333 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7bc2\" (UniqueName: \"kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.962173 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.962505 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.970775 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.973255 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.981970 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-140c-account-create-update-csqgp"] Jan 29 11:15:36 crc kubenswrapper[4593]: I0129 11:15:36.997092 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkbv8\" (UniqueName: \"kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8\") pod \"cinder-4c8a-account-create-update-psrpm\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.057343 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.063254 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7bc2\" (UniqueName: \"kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.063400 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.063469 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tshsl\" (UniqueName: \"kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.063968 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.064737 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.081056 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7bc2\" (UniqueName: \"kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2\") pod \"neutron-db-create-jgv94\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.154796 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.165772 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.165861 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tshsl\" (UniqueName: \"kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.166492 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.186822 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tshsl\" (UniqueName: \"kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl\") pod \"neutron-140c-account-create-update-csqgp\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:37 crc kubenswrapper[4593]: I0129 11:15:37.286107 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.839415 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.853251 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-625ls" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894537 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894583 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894794 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894868 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m28jc\" (UniqueName: \"kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894896 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.894917 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn\") pod \"6405a039-ae6d-4255-891c-ef8452e19df3\" (UID: \"6405a039-ae6d-4255-891c-ef8452e19df3\") " Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.895347 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.895398 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.896711 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts" (OuterVolumeSpecName: "scripts") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.896745 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run" (OuterVolumeSpecName: "var-run") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.906384 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.922076 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc" (OuterVolumeSpecName: "kube-api-access-m28jc") pod "6405a039-ae6d-4255-891c-ef8452e19df3" (UID: "6405a039-ae6d-4255-891c-ef8452e19df3"). InnerVolumeSpecName "kube-api-access-m28jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.993024 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-625ls" event={"ID":"56d59502-9350-4842-bd01-35d55f0b47fa","Type":"ContainerDied","Data":"0fe30972eae6fe027a2826fd5f842e093abe225a13c6181792f977be2efdbfe1"} Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.993068 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe30972eae6fe027a2826fd5f842e093abe225a13c6181792f977be2efdbfe1" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.993141 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-625ls" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.995786 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-cc9qq-config-tbd2h" event={"ID":"6405a039-ae6d-4255-891c-ef8452e19df3","Type":"ContainerDied","Data":"a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e"} Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.995812 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a351d8b59f6d9bf56172509fb205e45b06b6feb5fd43ab7a09b461eb2ac5e62e" Jan 29 11:15:38 crc kubenswrapper[4593]: I0129 11:15:38.995861 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-cc9qq-config-tbd2h" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.000244 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts\") pod \"56d59502-9350-4842-bd01-35d55f0b47fa\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.000439 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mckcx\" (UniqueName: \"kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx\") pod \"56d59502-9350-4842-bd01-35d55f0b47fa\" (UID: \"56d59502-9350-4842-bd01-35d55f0b47fa\") " Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.000969 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m28jc\" (UniqueName: \"kubernetes.io/projected/6405a039-ae6d-4255-891c-ef8452e19df3-kube-api-access-m28jc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.000988 4593 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.001000 4593 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.001011 4593 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.001024 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6405a039-ae6d-4255-891c-ef8452e19df3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.001035 4593 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6405a039-ae6d-4255-891c-ef8452e19df3-var-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.006068 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56d59502-9350-4842-bd01-35d55f0b47fa" (UID: "56d59502-9350-4842-bd01-35d55f0b47fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.025526 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx" (OuterVolumeSpecName: "kube-api-access-mckcx") pod "56d59502-9350-4842-bd01-35d55f0b47fa" (UID: "56d59502-9350-4842-bd01-35d55f0b47fa"). InnerVolumeSpecName "kube-api-access-mckcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:39 crc kubenswrapper[4593]: E0129 11:15:39.029864 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 29 11:15:39 crc kubenswrapper[4593]: E0129 11:15:39.033775 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z4lrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-db54x_openstack(a6bbbb39-f79c-4647-976b-6225ac21e63b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:15:39 crc kubenswrapper[4593]: E0129 11:15:39.034881 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-db54x" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.103855 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56d59502-9350-4842-bd01-35d55f0b47fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.104082 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mckcx\" (UniqueName: \"kubernetes.io/projected/56d59502-9350-4842-bd01-35d55f0b47fa-kube-api-access-mckcx\") on node \"crc\" DevicePath \"\"" Jan 
29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.407539 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.714620 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9hskn"] Jan 29 11:15:39 crc kubenswrapper[4593]: W0129 11:15:39.725806 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c572c7d_971f_4f21_81cf_f5d5f7d5d9fe.slice/crio-ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d WatchSource:0}: Error finding container ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d: Status 404 returned error can't find the container with id ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.769309 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-140c-account-create-update-csqgp"] Jan 29 11:15:39 crc kubenswrapper[4593]: I0129 11:15:39.858454 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-0486-account-create-update-f9r68"] Jan 29 11:15:39 crc kubenswrapper[4593]: W0129 11:15:39.883084 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d46f220_cb33_4768_91f5_c59e98c41af4.slice/crio-a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db WatchSource:0}: Error finding container a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db: Status 404 returned error can't find the container with id a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.019585 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-140c-account-create-update-csqgp" event={"ID":"1ef7a572-9631-4078-a6ed-419d2a4dfdf9","Type":"ContainerStarted","Data":"81b776500c98b0a9276a4f2e3935ca69f3a82dbb87538e400d856f7bf4e5802a"} Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.029889 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerStarted","Data":"40d4746c878ae8363cafa2fcc314b2c7cfd9f6b73acda03b1c6d583170650c6b"} Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.032399 4593 generic.go:334] "Generic (PLEG): container finished" podID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerID="ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410" exitCode=0 Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.032459 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410"} Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.032483 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerStarted","Data":"5ea6d9d61fd2cf95d30b451aea020cc55aa6add991037bc5209ce7d2a046ef7e"} Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.051334 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0486-account-create-update-f9r68" 
event={"ID":"6d46f220-cb33-4768-91f5-c59e98c41af4","Type":"ContainerStarted","Data":"a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db"} Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.062088 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9hskn" event={"ID":"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe","Type":"ContainerStarted","Data":"ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d"} Jan 29 11:15:40 crc kubenswrapper[4593]: E0129 11:15:40.062923 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-db54x" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.082768 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-cc9qq-config-tbd2h"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.092919 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4q5nh" podStartSLOduration=4.660458898 podStartE2EDuration="19.092897018s" podCreationTimestamp="2026-01-29 11:15:21 +0000 UTC" firstStartedPulling="2026-01-29 11:15:24.787839432 +0000 UTC m=+990.660873623" lastFinishedPulling="2026-01-29 11:15:39.220277552 +0000 UTC m=+1005.093311743" observedRunningTime="2026-01-29 11:15:40.075113045 +0000 UTC m=+1005.948147236" watchObservedRunningTime="2026-01-29 11:15:40.092897018 +0000 UTC m=+1005.965931199" Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.105830 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-cc9qq-config-tbd2h"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.113010 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wzm6z"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.187902 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vdz52"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.224042 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4c8a-account-create-update-psrpm"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.235017 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-jgv94"] Jan 29 11:15:40 crc kubenswrapper[4593]: I0129 11:15:40.304251 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.073791 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wzm6z" event={"ID":"9c0b4a25-540c-47dd-96fb-fdc6872721b5","Type":"ContainerStarted","Data":"19987dc4123000c07157f5b274ec3539c6844f271738b4bce8683858a4a97786"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.080249 4593 generic.go:334] "Generic (PLEG): container finished" podID="6d46f220-cb33-4768-91f5-c59e98c41af4" containerID="db6e520018218e0ecd1d4a8d69f63a0e96eea393f5e0abbccf345503319fb4c2" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.084180 4593 generic.go:334] "Generic (PLEG): container finished" podID="7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" containerID="43d82ed1472c3625ce9296a41e8408518af652ca97d81bd779f6e88331c78c4e" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.113465 4593 generic.go:334] "Generic (PLEG): 
container finished" podID="fbee97db-a8f1-43e0-ac0b-ec58529b2c03" containerID="8daab26085422d8b821fec9dd8845576bd1f7996b7bd02a206e4ec1ed954891a" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.137423 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6405a039-ae6d-4255-891c-ef8452e19df3" path="/var/lib/kubelet/pods/6405a039-ae6d-4255-891c-ef8452e19df3/volumes" Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.138084 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"0d0755f7783c5a3fce0e7aaeb9ebf8fc5a1b0ef602a35a7fd8d076194eb911a5"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.138115 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0486-account-create-update-f9r68" event={"ID":"6d46f220-cb33-4768-91f5-c59e98c41af4","Type":"ContainerDied","Data":"db6e520018218e0ecd1d4a8d69f63a0e96eea393f5e0abbccf345503319fb4c2"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.138128 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9hskn" event={"ID":"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe","Type":"ContainerDied","Data":"43d82ed1472c3625ce9296a41e8408518af652ca97d81bd779f6e88331c78c4e"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.138142 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4c8a-account-create-update-psrpm" event={"ID":"fbee97db-a8f1-43e0-ac0b-ec58529b2c03","Type":"ContainerDied","Data":"8daab26085422d8b821fec9dd8845576bd1f7996b7bd02a206e4ec1ed954891a"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.138153 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4c8a-account-create-update-psrpm" event={"ID":"fbee97db-a8f1-43e0-ac0b-ec58529b2c03","Type":"ContainerStarted","Data":"52c3d566be62f7b3d906eb419cd5398b1f874dac4318e3b655d95285b1760187"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.143503 4593 generic.go:334] "Generic (PLEG): container finished" podID="1ef7a572-9631-4078-a6ed-419d2a4dfdf9" containerID="d302776b71ae9de08283f287bc6180cc80cb27e0867558e7d6ef7199f716f657" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.143586 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-140c-account-create-update-csqgp" event={"ID":"1ef7a572-9631-4078-a6ed-419d2a4dfdf9","Type":"ContainerDied","Data":"d302776b71ae9de08283f287bc6180cc80cb27e0867558e7d6ef7199f716f657"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.146008 4593 generic.go:334] "Generic (PLEG): container finished" podID="52b59817-1d9d-431d-8055-cf98107b89a2" containerID="26e9d793caead0da7c6fbe2d2cc88998f753f02199ec672516904069fc61c2fc" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.146067 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdz52" event={"ID":"52b59817-1d9d-431d-8055-cf98107b89a2","Type":"ContainerDied","Data":"26e9d793caead0da7c6fbe2d2cc88998f753f02199ec672516904069fc61c2fc"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.146082 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdz52" event={"ID":"52b59817-1d9d-431d-8055-cf98107b89a2","Type":"ContainerStarted","Data":"b785ccbd805876d6971e08b5433aca3992b45b4e6be43abcc2d0897531f24fb0"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.149253 4593 generic.go:334] "Generic (PLEG): container 
finished" podID="115d89c5-8038-4b55-9f1d-d0f169ee0b53" containerID="9d37cf9a7f03d5742ea9e7314623a8e8f189e15526f469c97b71739526cfc70b" exitCode=0 Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.149277 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jgv94" event={"ID":"115d89c5-8038-4b55-9f1d-d0f169ee0b53","Type":"ContainerDied","Data":"9d37cf9a7f03d5742ea9e7314623a8e8f189e15526f469c97b71739526cfc70b"} Jan 29 11:15:41 crc kubenswrapper[4593]: I0129 11:15:41.149292 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jgv94" event={"ID":"115d89c5-8038-4b55-9f1d-d0f169ee0b53","Type":"ContainerStarted","Data":"31a0af7b667010f12dd92d2c3d2bdcf8d785c222dccec254e7c9ab66ac0c956c"} Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.160906 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"5ea99dce0931642c048cb124d51210d01f68a0c9d1a827e3958df487a4f80d5c"} Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.168871 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerStarted","Data":"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f"} Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.246922 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.247270 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.566502 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.685929 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwlg4\" (UniqueName: \"kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4\") pod \"52b59817-1d9d-431d-8055-cf98107b89a2\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.686277 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts\") pod \"52b59817-1d9d-431d-8055-cf98107b89a2\" (UID: \"52b59817-1d9d-431d-8055-cf98107b89a2\") " Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.687673 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52b59817-1d9d-431d-8055-cf98107b89a2" (UID: "52b59817-1d9d-431d-8055-cf98107b89a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.693366 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4" (OuterVolumeSpecName: "kube-api-access-lwlg4") pod "52b59817-1d9d-431d-8055-cf98107b89a2" (UID: "52b59817-1d9d-431d-8055-cf98107b89a2"). InnerVolumeSpecName "kube-api-access-lwlg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.790610 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwlg4\" (UniqueName: \"kubernetes.io/projected/52b59817-1d9d-431d-8055-cf98107b89a2-kube-api-access-lwlg4\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:42 crc kubenswrapper[4593]: I0129 11:15:42.790662 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52b59817-1d9d-431d-8055-cf98107b89a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.004549 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.010658 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.019617 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.038184 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.047976 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.096889 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts\") pod \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.098540 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkbv8\" (UniqueName: \"kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8\") pod \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.098827 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "115d89c5-8038-4b55-9f1d-d0f169ee0b53" (UID: "115d89c5-8038-4b55-9f1d-d0f169ee0b53"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.099040 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts\") pod \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\" (UID: \"fbee97db-a8f1-43e0-ac0b-ec58529b2c03\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.099222 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts\") pod \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.099364 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts\") pod \"6d46f220-cb33-4768-91f5-c59e98c41af4\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.100521 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts\") pod \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.100650 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tshsl\" (UniqueName: \"kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl\") pod \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\" (UID: \"1ef7a572-9631-4078-a6ed-419d2a4dfdf9\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.100859 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l48ll\" (UniqueName: \"kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll\") pod \"6d46f220-cb33-4768-91f5-c59e98c41af4\" (UID: \"6d46f220-cb33-4768-91f5-c59e98c41af4\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.102039 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7bc2\" (UniqueName: \"kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2\") pod \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\" (UID: \"115d89c5-8038-4b55-9f1d-d0f169ee0b53\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.102756 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nxmp\" (UniqueName: \"kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp\") pod \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\" (UID: \"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe\") " Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.103563 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115d89c5-8038-4b55-9f1d-d0f169ee0b53-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.101016 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fbee97db-a8f1-43e0-ac0b-ec58529b2c03" (UID: 
"fbee97db-a8f1-43e0-ac0b-ec58529b2c03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.101416 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" (UID: "7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.102037 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d46f220-cb33-4768-91f5-c59e98c41af4" (UID: "6d46f220-cb33-4768-91f5-c59e98c41af4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.107787 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp" (OuterVolumeSpecName: "kube-api-access-9nxmp") pod "7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" (UID: "7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe"). InnerVolumeSpecName "kube-api-access-9nxmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.110759 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll" (OuterVolumeSpecName: "kube-api-access-l48ll") pod "6d46f220-cb33-4768-91f5-c59e98c41af4" (UID: "6d46f220-cb33-4768-91f5-c59e98c41af4"). InnerVolumeSpecName "kube-api-access-l48ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.116828 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8" (OuterVolumeSpecName: "kube-api-access-xkbv8") pod "fbee97db-a8f1-43e0-ac0b-ec58529b2c03" (UID: "fbee97db-a8f1-43e0-ac0b-ec58529b2c03"). InnerVolumeSpecName "kube-api-access-xkbv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.125032 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2" (OuterVolumeSpecName: "kube-api-access-l7bc2") pod "115d89c5-8038-4b55-9f1d-d0f169ee0b53" (UID: "115d89c5-8038-4b55-9f1d-d0f169ee0b53"). InnerVolumeSpecName "kube-api-access-l7bc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.127798 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl" (OuterVolumeSpecName: "kube-api-access-tshsl") pod "1ef7a572-9631-4078-a6ed-419d2a4dfdf9" (UID: "1ef7a572-9631-4078-a6ed-419d2a4dfdf9"). InnerVolumeSpecName "kube-api-access-tshsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.154447 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ef7a572-9631-4078-a6ed-419d2a4dfdf9" (UID: "1ef7a572-9631-4078-a6ed-419d2a4dfdf9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.183476 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9hskn" event={"ID":"7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe","Type":"ContainerDied","Data":"ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.183520 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff9bb89b7c902aa21b9563266c9cfb7fe9ad60b48ff7722f5eef3b62b09f4d0d" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.183581 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9hskn" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.187814 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4c8a-account-create-update-psrpm" event={"ID":"fbee97db-a8f1-43e0-ac0b-ec58529b2c03","Type":"ContainerDied","Data":"52c3d566be62f7b3d906eb419cd5398b1f874dac4318e3b655d95285b1760187"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.188191 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52c3d566be62f7b3d906eb419cd5398b1f874dac4318e3b655d95285b1760187" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.187821 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4c8a-account-create-update-psrpm" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.189943 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-140c-account-create-update-csqgp" event={"ID":"1ef7a572-9631-4078-a6ed-419d2a4dfdf9","Type":"ContainerDied","Data":"81b776500c98b0a9276a4f2e3935ca69f3a82dbb87538e400d856f7bf4e5802a"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.189975 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81b776500c98b0a9276a4f2e3935ca69f3a82dbb87538e400d856f7bf4e5802a" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.190027 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-140c-account-create-update-csqgp" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.192694 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vdz52" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.192769 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vdz52" event={"ID":"52b59817-1d9d-431d-8055-cf98107b89a2","Type":"ContainerDied","Data":"b785ccbd805876d6971e08b5433aca3992b45b4e6be43abcc2d0897531f24fb0"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.192829 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b785ccbd805876d6971e08b5433aca3992b45b4e6be43abcc2d0897531f24fb0" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.194313 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-jgv94" event={"ID":"115d89c5-8038-4b55-9f1d-d0f169ee0b53","Type":"ContainerDied","Data":"31a0af7b667010f12dd92d2c3d2bdcf8d785c222dccec254e7c9ab66ac0c956c"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.194342 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a0af7b667010f12dd92d2c3d2bdcf8d785c222dccec254e7c9ab66ac0c956c" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.194392 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-jgv94" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.200015 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"fa7e71015c1b2be01d5f5981751087bd1cea0cca46687ab9c86c925c42c245ce"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.200100 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"90ef97ef119e260947d77b74c01609fa837e2c9223961887abf5012eb91089f8"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.200229 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"1d71f5edac5c04adc917e6e121934d8398671db0557c20eb1573f86276c682d3"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.202384 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-0486-account-create-update-f9r68" event={"ID":"6d46f220-cb33-4768-91f5-c59e98c41af4","Type":"ContainerDied","Data":"a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db"} Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.202409 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-0486-account-create-update-f9r68" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.202416 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e37bfdfc03a2951aa661f5cbff45c0faebf3f66c8b535b8e89b5cc0fa0f8db" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206768 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206795 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tshsl\" (UniqueName: \"kubernetes.io/projected/1ef7a572-9631-4078-a6ed-419d2a4dfdf9-kube-api-access-tshsl\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206807 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l48ll\" (UniqueName: \"kubernetes.io/projected/6d46f220-cb33-4768-91f5-c59e98c41af4-kube-api-access-l48ll\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206816 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7bc2\" (UniqueName: \"kubernetes.io/projected/115d89c5-8038-4b55-9f1d-d0f169ee0b53-kube-api-access-l7bc2\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206825 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nxmp\" (UniqueName: \"kubernetes.io/projected/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-kube-api-access-9nxmp\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206834 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkbv8\" (UniqueName: \"kubernetes.io/projected/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-kube-api-access-xkbv8\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206843 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fbee97db-a8f1-43e0-ac0b-ec58529b2c03-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206852 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.206861 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d46f220-cb33-4768-91f5-c59e98c41af4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:43 crc kubenswrapper[4593]: I0129 11:15:43.351514 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:15:43 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:15:43 crc kubenswrapper[4593]: > Jan 29 11:15:47 crc kubenswrapper[4593]: I0129 11:15:47.247403 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wzm6z" event={"ID":"9c0b4a25-540c-47dd-96fb-fdc6872721b5","Type":"ContainerStarted","Data":"b2e16a35b6612eefbbea849496217b01c0c3973f0a5bc7ad6ae362ff548b8cf0"} Jan 29 11:15:47 crc 
kubenswrapper[4593]: I0129 11:15:47.263965 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wzm6z" podStartSLOduration=4.648576027 podStartE2EDuration="11.263941973s" podCreationTimestamp="2026-01-29 11:15:36 +0000 UTC" firstStartedPulling="2026-01-29 11:15:40.127279466 +0000 UTC m=+1006.000313657" lastFinishedPulling="2026-01-29 11:15:46.742645412 +0000 UTC m=+1012.615679603" observedRunningTime="2026-01-29 11:15:47.261426876 +0000 UTC m=+1013.134461077" watchObservedRunningTime="2026-01-29 11:15:47.263941973 +0000 UTC m=+1013.136976164" Jan 29 11:15:49 crc kubenswrapper[4593]: I0129 11:15:49.273587 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"c22af3db1da1ae1129b4ec6fe15d486bf3eacf9f0173cc870a43a6edb37e08ac"} Jan 29 11:15:49 crc kubenswrapper[4593]: I0129 11:15:49.273993 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"8533b35814b444f9ddc2d79d0a6e8fb8e59a8ae2d286b48ff34f52ab8340e70e"} Jan 29 11:15:50 crc kubenswrapper[4593]: I0129 11:15:50.289050 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"f9fba7c0509323453d3cf6ed2a1801c969ce5c3c1a673fb0c483cea4ca0554e7"} Jan 29 11:15:50 crc kubenswrapper[4593]: I0129 11:15:50.289096 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"43dff5e70b4ad12e2e56d06fc999ce3dd5f51c617c48da5ef14dfbd5eb6bb928"} Jan 29 11:15:51 crc kubenswrapper[4593]: I0129 11:15:51.303238 4593 generic.go:334] "Generic (PLEG): container finished" podID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerID="193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f" exitCode=0 Jan 29 11:15:51 crc kubenswrapper[4593]: I0129 11:15:51.303290 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.311905 4593 generic.go:334] "Generic (PLEG): container finished" podID="9c0b4a25-540c-47dd-96fb-fdc6872721b5" containerID="b2e16a35b6612eefbbea849496217b01c0c3973f0a5bc7ad6ae362ff548b8cf0" exitCode=0 Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.311988 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wzm6z" event={"ID":"9c0b4a25-540c-47dd-96fb-fdc6872721b5","Type":"ContainerDied","Data":"b2e16a35b6612eefbbea849496217b01c0c3973f0a5bc7ad6ae362ff548b8cf0"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321891 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"8c937d48b6809e97a05669102c342c5012c0365005aae5e341168f784ebf2fe5"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321940 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"9ec374c157e39e6657c33c07e3522999ff1ac300e55747d5335dfb5e0bb6a420"} Jan 
29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321951 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"78ed3445d2f7349c2a6010e30322a72662c800595c4d47b86979e008ede84af8"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321963 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"357726caca141c948b187325349278573ec5989439588cb4329e0a6ba0004c78"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321972 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"a6ef676591f532dcad332fb732fdb48c9f3ec5a0704446d91ee3e7c9d27193e3"} Jan 29 11:15:52 crc kubenswrapper[4593]: I0129 11:15:52.321983 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"ff1fa00004dc29f1cce6c3f17a1cc1ec156454f9b15dc0635164c8dd81f15278"} Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.302332 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:15:53 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:15:53 crc kubenswrapper[4593]: > Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.336963 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"307ad072-fdfc-4c55-8891-bc041d755b1a","Type":"ContainerStarted","Data":"3aa4adc48f32aa56051c740cb98579c90ef0bac7f9e462c434ebd043f8612db0"} Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.339910 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerStarted","Data":"392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9"} Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.341895 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-db54x" event={"ID":"a6bbbb39-f79c-4647-976b-6225ac21e63b","Type":"ContainerStarted","Data":"6029f6551650b545bead0d4f37b1f5f3a81f76cf7f6f139456a1354a00bcaf99"} Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.383978 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.538099391 podStartE2EDuration="52.383962131s" podCreationTimestamp="2026-01-29 11:15:01 +0000 UTC" firstStartedPulling="2026-01-29 11:15:40.371548424 +0000 UTC m=+1006.244582615" lastFinishedPulling="2026-01-29 11:15:51.217411164 +0000 UTC m=+1017.090445355" observedRunningTime="2026-01-29 11:15:53.383110818 +0000 UTC m=+1019.256145029" watchObservedRunningTime="2026-01-29 11:15:53.383962131 +0000 UTC m=+1019.256996322" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.406060 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-db54x" podStartSLOduration=2.548786906 podStartE2EDuration="34.40603563s" podCreationTimestamp="2026-01-29 11:15:19 +0000 UTC" firstStartedPulling="2026-01-29 11:15:20.797706862 +0000 UTC m=+986.670741053" 
lastFinishedPulling="2026-01-29 11:15:52.654955586 +0000 UTC m=+1018.527989777" observedRunningTime="2026-01-29 11:15:53.40189901 +0000 UTC m=+1019.274933211" watchObservedRunningTime="2026-01-29 11:15:53.40603563 +0000 UTC m=+1019.279069821" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.428114 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k4l8n" podStartSLOduration=11.667982055 podStartE2EDuration="24.428095898s" podCreationTimestamp="2026-01-29 11:15:29 +0000 UTC" firstStartedPulling="2026-01-29 11:15:40.040784668 +0000 UTC m=+1005.913818859" lastFinishedPulling="2026-01-29 11:15:52.800898511 +0000 UTC m=+1018.673932702" observedRunningTime="2026-01-29 11:15:53.421429401 +0000 UTC m=+1019.294463592" watchObservedRunningTime="2026-01-29 11:15:53.428095898 +0000 UTC m=+1019.301130089" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.677316 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727408 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727829 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d46f220-cb33-4768-91f5-c59e98c41af4" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727846 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d46f220-cb33-4768-91f5-c59e98c41af4" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727859 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbee97db-a8f1-43e0-ac0b-ec58529b2c03" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727866 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbee97db-a8f1-43e0-ac0b-ec58529b2c03" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727880 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b59817-1d9d-431d-8055-cf98107b89a2" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727888 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b59817-1d9d-431d-8055-cf98107b89a2" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727897 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6405a039-ae6d-4255-891c-ef8452e19df3" containerName="ovn-config" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727906 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="6405a039-ae6d-4255-891c-ef8452e19df3" containerName="ovn-config" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727923 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115d89c5-8038-4b55-9f1d-d0f169ee0b53" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727930 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="115d89c5-8038-4b55-9f1d-d0f169ee0b53" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727943 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef7a572-9631-4078-a6ed-419d2a4dfdf9" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 
11:15:53.727949 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef7a572-9631-4078-a6ed-419d2a4dfdf9" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727961 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727969 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.727988 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0b4a25-540c-47dd-96fb-fdc6872721b5" containerName="keystone-db-sync" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.727995 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0b4a25-540c-47dd-96fb-fdc6872721b5" containerName="keystone-db-sync" Jan 29 11:15:53 crc kubenswrapper[4593]: E0129 11:15:53.728016 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d59502-9350-4842-bd01-35d55f0b47fa" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728024 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d59502-9350-4842-bd01-35d55f0b47fa" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728191 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0b4a25-540c-47dd-96fb-fdc6872721b5" containerName="keystone-db-sync" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728205 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbee97db-a8f1-43e0-ac0b-ec58529b2c03" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728220 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d46f220-cb33-4768-91f5-c59e98c41af4" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728233 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728241 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="115d89c5-8038-4b55-9f1d-d0f169ee0b53" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728252 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b59817-1d9d-431d-8055-cf98107b89a2" containerName="mariadb-database-create" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728260 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef7a572-9631-4078-a6ed-419d2a4dfdf9" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728270 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d59502-9350-4842-bd01-35d55f0b47fa" containerName="mariadb-account-create-update" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.728279 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="6405a039-ae6d-4255-891c-ef8452e19df3" containerName="ovn-config" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.729121 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.731777 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.746080 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.776398 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle\") pod \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.776556 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data\") pod \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.776666 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnrhr\" (UniqueName: \"kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr\") pod \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\" (UID: \"9c0b4a25-540c-47dd-96fb-fdc6872721b5\") " Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.786294 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr" (OuterVolumeSpecName: "kube-api-access-gnrhr") pod "9c0b4a25-540c-47dd-96fb-fdc6872721b5" (UID: "9c0b4a25-540c-47dd-96fb-fdc6872721b5"). InnerVolumeSpecName "kube-api-access-gnrhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.836055 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c0b4a25-540c-47dd-96fb-fdc6872721b5" (UID: "9c0b4a25-540c-47dd-96fb-fdc6872721b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.843527 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data" (OuterVolumeSpecName: "config-data") pod "9c0b4a25-540c-47dd-96fb-fdc6872721b5" (UID: "9c0b4a25-540c-47dd-96fb-fdc6872721b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.878951 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r8wr\" (UniqueName: \"kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879019 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879057 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879178 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879222 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879246 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879300 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879314 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnrhr\" (UniqueName: \"kubernetes.io/projected/9c0b4a25-540c-47dd-96fb-fdc6872721b5-kube-api-access-gnrhr\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.879324 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c0b4a25-540c-47dd-96fb-fdc6872721b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.982740 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc\") 
pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.983279 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.983384 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.983532 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.983553 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.983893 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.984245 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.984421 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.984645 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r8wr\" (UniqueName: \"kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.984817 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: 
\"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:53 crc kubenswrapper[4593]: I0129 11:15:53.985509 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.004250 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r8wr\" (UniqueName: \"kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr\") pod \"dnsmasq-dns-5c79d794d7-4tqv8\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.048016 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.362556 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-wzm6z" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.362713 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wzm6z" event={"ID":"9c0b4a25-540c-47dd-96fb-fdc6872721b5","Type":"ContainerDied","Data":"19987dc4123000c07157f5b274ec3539c6844f271738b4bce8683858a4a97786"} Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.363985 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19987dc4123000c07157f5b274ec3539c6844f271738b4bce8683858a4a97786" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.631419 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.661917 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-k7lbh"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.663721 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.667804 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.668091 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.668199 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h76tz" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.668256 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.668345 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.756826 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k7lbh"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.783815 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.804990 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.805029 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.805091 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.805140 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjqtw\" (UniqueName: \"kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.805158 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.805189 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " 
pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.847844 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.849523 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.874777 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908679 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908836 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908860 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908927 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908957 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjqtw\" (UniqueName: \"kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.908995 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.914693 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.917325 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 
11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.917951 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.918870 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.924662 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:54 crc kubenswrapper[4593]: I0129 11:15:54.957225 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjqtw\" (UniqueName: \"kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw\") pod \"keystone-bootstrap-k7lbh\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.011560 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.011908 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.011999 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.012040 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.012107 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 
11:15:55.012132 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2f89\" (UniqueName: \"kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.044401 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qqbm9"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.049099 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.061573 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.061742 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jhpvr" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.062043 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170015 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170077 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170138 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170160 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2f89\" (UniqueName: \"kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170272 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.170291 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: 
I0129 11:15:55.171194 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.176180 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h76tz" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.183974 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.221190 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.222300 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.223134 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.223793 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.242844 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qqbm9"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.271928 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.272031 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.272083 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 
11:15:55.272103 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.272144 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb8cj\" (UniqueName: \"kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.272212 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.298535 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.299971 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.332898 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.333291 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.333407 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.335023 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pkstn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.349386 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2f89\" (UniqueName: \"kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89\") pod \"dnsmasq-dns-5b868669f-fp8w5\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.366611 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386582 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386653 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386699 4593 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd28q\" (UniqueName: \"kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386719 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386758 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386779 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386796 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386825 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386849 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386870 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.386891 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb8cj\" (UniqueName: \"kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.388159 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.418552 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.445686 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qt4jn"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.448469 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.448694 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.451376 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.454181 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" event={"ID":"75b7f494-5bdf-48a0-95a4-745655079166","Type":"ContainerStarted","Data":"2a41223bedf76d4fd1fd63bd5a7474603d89c512636bb2a6267cd36446322174"} Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.460704 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb8cj\" (UniqueName: \"kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.462831 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.462867 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.463584 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts\") pod \"cinder-db-sync-qqbm9\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.486183 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qt4jn"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.487829 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc 
kubenswrapper[4593]: I0129 11:15:55.487881 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.489168 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.489223 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd28q\" (UniqueName: \"kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.489280 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.489764 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.490443 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.495822 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.500481 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.500735 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.505685 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.508437 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.540908 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xg5l8" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.594449 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.594720 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59ccv\" (UniqueName: \"kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.594762 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.614532 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.616716 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd28q\" (UniqueName: \"kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q\") pod \"horizon-579dc58d97-z59ff\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.624811 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.635041 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.635160 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699373 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699409 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699442 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699466 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699603 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59ccv\" (UniqueName: \"kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699721 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699913 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.699988 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.700011 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxh7\" (UniqueName: \"kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.704510 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.713984 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jhpvr" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.714166 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.716873 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801572 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801618 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxh7\" (UniqueName: \"kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801676 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801699 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801730 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801771 4593 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.801844 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.802985 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59ccv\" (UniqueName: \"kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv\") pod \"neutron-db-sync-qt4jn\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.807826 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.808275 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.809528 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.811734 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.827388 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.875624 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxh7\" (UniqueName: \"kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.876151 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.881032 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.885338 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.892485 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.911087 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " pod="openstack/ceilometer-0" Jan 29 11:15:55 crc kubenswrapper[4593]: I0129 11:15:55.926968 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.000108 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.013084 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2llwr\" (UniqueName: \"kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.014352 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.014467 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.014562 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.014916 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.022058 4593 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-db-sync-2wbrt"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.023091 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.036327 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qf2gb" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.036571 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.036651 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.057568 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2wbrt"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.073371 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-dd7hj"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.074468 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.084947 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.085091 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.085183 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2pqk2" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.112755 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116227 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116263 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116283 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116313 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116355 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116394 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116429 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q8ts\" (UniqueName: \"kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116462 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h678s\" (UniqueName: \"kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116484 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116522 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116541 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2llwr\" (UniqueName: \"kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116555 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.116578 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.117032 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.123812 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.124828 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.149541 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.168426 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dd7hj"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.169329 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2llwr\" (UniqueName: \"kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr\") pod \"horizon-5dc699bb9-mhr4g\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220730 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q8ts\" (UniqueName: \"kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220807 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h678s\" (UniqueName: \"kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220854 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220879 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220912 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220943 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220964 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.220982 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.226804 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.234011 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.234303 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.236233 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.244289 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.255691 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.282212 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q8ts\" (UniqueName: \"kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts\") pod \"placement-db-sync-dd7hj\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") " pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.309611 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.311387 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.311567 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.315824 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h678s\" (UniqueName: \"kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s\") pod \"barbican-db-sync-2wbrt\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322478 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322547 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322685 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322722 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322760 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66q5l\" (UniqueName: \"kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " 
pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.322782 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.327451 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.368386 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.425382 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.425450 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.429619 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.436469 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.437027 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.437052 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66q5l\" (UniqueName: \"kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.437108 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.438046 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.438243 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k7lbh"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.438620 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.439404 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.458502 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dd7hj" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.461472 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.486043 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66q5l\" (UniqueName: \"kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l\") pod \"dnsmasq-dns-cf78879c9-kpbz6\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.522914 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.531346 4593 generic.go:334] "Generic (PLEG): container finished" podID="75b7f494-5bdf-48a0-95a4-745655079166" containerID="ddb63bd3499a1d03d89e38f1924510a054aae77eea34b67608f0f9a0d9d08549" exitCode=0 Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.531510 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" event={"ID":"75b7f494-5bdf-48a0-95a4-745655079166","Type":"ContainerDied","Data":"ddb63bd3499a1d03d89e38f1924510a054aae77eea34b67608f0f9a0d9d08549"} Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.551145 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7lbh" event={"ID":"b3035bcf-246f-4bad-9c08-bd2188aa4098","Type":"ContainerStarted","Data":"f93093eedad3e691c33b05950a5766a9bfd338de35a4024df89e92e1e6b5e974"} Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.603706 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.635671 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:15:56 crc kubenswrapper[4593]: I0129 11:15:56.829282 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qqbm9"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.287961 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qt4jn"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.345166 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.419729 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.439075 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.491760 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.577146 4593 generic.go:334] "Generic (PLEG): container finished" podID="622ba42a-ba2c-4296-a192-4342eca1ac9c" containerID="746618261d342c822d0641c0709710a02daa46246cf61c311c0480573cb3deb9" exitCode=0 Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.577213 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" event={"ID":"622ba42a-ba2c-4296-a192-4342eca1ac9c","Type":"ContainerDied","Data":"746618261d342c822d0641c0709710a02daa46246cf61c311c0480573cb3deb9"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.577243 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" event={"ID":"622ba42a-ba2c-4296-a192-4342eca1ac9c","Type":"ContainerStarted","Data":"213e53d8a008fd4b685317395335491ab3da62d8c0fe3cb7974f899383c50b68"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.583801 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f789a029-2899-4cb2-8b99-55b77db98b9f","Type":"ContainerStarted","Data":"81e674e8a5ccd570da2b45a02c26820c6aece1f8b0def79a73d4b051b04177a1"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.593858 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0\") pod \"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.593983 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb\") pod \"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.594032 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc\") pod \"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.594051 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb\") pod 
\"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.594203 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r8wr\" (UniqueName: \"kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr\") pod \"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.594228 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config\") pod \"75b7f494-5bdf-48a0-95a4-745655079166\" (UID: \"75b7f494-5bdf-48a0-95a4-745655079166\") " Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.597193 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qt4jn" event={"ID":"1563c063-cd19-4793-97c0-45ca3e4a3e0c","Type":"ContainerStarted","Data":"e190e45570748f76e4003c2271bb97bb9945d02157bf9978762b8a5417306bd1"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.625970 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dc699bb9-mhr4g" event={"ID":"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4","Type":"ContainerStarted","Data":"89376b5d197b69125b3a6abd1f18c2e1c2f09575f848fb7b067180fd45d54911"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.639008 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" event={"ID":"75b7f494-5bdf-48a0-95a4-745655079166","Type":"ContainerDied","Data":"2a41223bedf76d4fd1fd63bd5a7474603d89c512636bb2a6267cd36446322174"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.639331 4593 scope.go:117] "RemoveContainer" containerID="ddb63bd3499a1d03d89e38f1924510a054aae77eea34b67608f0f9a0d9d08549" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.640136 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-4tqv8" Jan 29 11:15:57 crc kubenswrapper[4593]: W0129 11:15:57.643169 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fe4b5cd_471d_49d2_bf2b_c3a6bac48aa9.slice/crio-b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1 WatchSource:0}: Error finding container b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1: Status 404 returned error can't find the container with id b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1 Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.652262 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.654501 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr" (OuterVolumeSpecName: "kube-api-access-4r8wr") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "kube-api-access-4r8wr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.657595 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqbm9" event={"ID":"9a0467fe-4786-4231-bf52-8a305e9a4f89","Type":"ContainerStarted","Data":"4a77796204d00631fc171e9b5f3f1adaf76dc3ea5c4251742c0c78ae086cb84b"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.698082 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7lbh" event={"ID":"b3035bcf-246f-4bad-9c08-bd2188aa4098","Type":"ContainerStarted","Data":"d6a963ebfb97713a0a7f5c7f7df33e57f221e22a4c463e45ec8292bcb918f3d4"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.699628 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-dd7hj"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.700500 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r8wr\" (UniqueName: \"kubernetes.io/projected/75b7f494-5bdf-48a0-95a4-745655079166-kube-api-access-4r8wr\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.706415 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-579dc58d97-z59ff" event={"ID":"c95d7c5f-c170-4c14-966f-acdbfa95582d","Type":"ContainerStarted","Data":"d50c694222ceb4b9afc6610284cd592d5480cbcc3fe1b8d77d9d22d8a2e395e4"} Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.737813 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-k7lbh" podStartSLOduration=3.737792245 podStartE2EDuration="3.737792245s" podCreationTimestamp="2026-01-29 11:15:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:57.732176726 +0000 UTC m=+1023.605210937" watchObservedRunningTime="2026-01-29 11:15:57.737792245 +0000 UTC m=+1023.610826446" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.782090 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.795691 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config" (OuterVolumeSpecName: "config") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.801035 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2wbrt"] Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.802260 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.802287 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.832225 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.859541 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.861190 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75b7f494-5bdf-48a0-95a4-745655079166" (UID: "75b7f494-5bdf-48a0-95a4-745655079166"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.904047 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.904077 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:57 crc kubenswrapper[4593]: I0129 11:15:57.904087 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b7f494-5bdf-48a0-95a4-745655079166-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.138057 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.183117 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.185485 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-4tqv8"] Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253217 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253288 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253398 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253704 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253734 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2f89\" (UniqueName: \"kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.253795 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb\") pod \"622ba42a-ba2c-4296-a192-4342eca1ac9c\" (UID: \"622ba42a-ba2c-4296-a192-4342eca1ac9c\") " Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.271232 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89" (OuterVolumeSpecName: "kube-api-access-j2f89") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "kube-api-access-j2f89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.302573 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.303619 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.315564 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config" (OuterVolumeSpecName: "config") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.321391 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.326175 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "622ba42a-ba2c-4296-a192-4342eca1ac9c" (UID: "622ba42a-ba2c-4296-a192-4342eca1ac9c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358068 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358107 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358121 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2f89\" (UniqueName: \"kubernetes.io/projected/622ba42a-ba2c-4296-a192-4342eca1ac9c-kube-api-access-j2f89\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358130 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358138 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.358145 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/622ba42a-ba2c-4296-a192-4342eca1ac9c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.730612 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b868669f-fp8w5" event={"ID":"622ba42a-ba2c-4296-a192-4342eca1ac9c","Type":"ContainerDied","Data":"213e53d8a008fd4b685317395335491ab3da62d8c0fe3cb7974f899383c50b68"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.730989 4593 scope.go:117] "RemoveContainer" containerID="746618261d342c822d0641c0709710a02daa46246cf61c311c0480573cb3deb9" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.731129 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fp8w5" Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.766906 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qt4jn" event={"ID":"1563c063-cd19-4793-97c0-45ca3e4a3e0c","Type":"ContainerStarted","Data":"b6f550864b30cf24b91a51e513d7e513cf9d2ef7137812c6edc720f9813967f9"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.785405 4593 generic.go:334] "Generic (PLEG): container finished" podID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerID="b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13" exitCode=0 Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.785494 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" event={"ID":"8fb458d5-4cf6-41ed-bf24-cc63387a17f8","Type":"ContainerDied","Data":"b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.785528 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" event={"ID":"8fb458d5-4cf6-41ed-bf24-cc63387a17f8","Type":"ContainerStarted","Data":"27df2f7abd836abf6cd98d3ccb15264008f2c53f8cce156f8a156ba7ca552d82"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.814619 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dd7hj" event={"ID":"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9","Type":"ContainerStarted","Data":"b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.832709 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.845577 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2wbrt" event={"ID":"c39458c0-d624-4ed0-8444-417e479028d2","Type":"ContainerStarted","Data":"48df691aa2eae747d4bfbb1c9e2a92cb2fce2abef2c0b184a7c467030b299d90"} Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.866437 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fp8w5"] Jan 29 11:15:58 crc kubenswrapper[4593]: I0129 11:15:58.881595 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qt4jn" podStartSLOduration=3.8815776079999997 podStartE2EDuration="3.881577608s" podCreationTimestamp="2026-01-29 11:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:15:58.810407299 +0000 UTC m=+1024.683441490" watchObservedRunningTime="2026-01-29 11:15:58.881577608 +0000 UTC m=+1024.754611799" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.121466 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622ba42a-ba2c-4296-a192-4342eca1ac9c" path="/var/lib/kubelet/pods/622ba42a-ba2c-4296-a192-4342eca1ac9c/volumes" Jan 29 11:15:59 crc 
kubenswrapper[4593]: I0129 11:15:59.681702 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b7f494-5bdf-48a0-95a4-745655079166" path="/var/lib/kubelet/pods/75b7f494-5bdf-48a0-95a4-745655079166/volumes" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.682946 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.683007 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:15:59 crc kubenswrapper[4593]: E0129 11:15:59.684160 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b7f494-5bdf-48a0-95a4-745655079166" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.684181 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b7f494-5bdf-48a0-95a4-745655079166" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: E0129 11:15:59.684219 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622ba42a-ba2c-4296-a192-4342eca1ac9c" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.684227 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="622ba42a-ba2c-4296-a192-4342eca1ac9c" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.687131 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b7f494-5bdf-48a0-95a4-745655079166" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.687169 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="622ba42a-ba2c-4296-a192-4342eca1ac9c" containerName="init" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.688875 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.688916 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.689094 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.802580 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.802655 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.802722 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rm7r\" (UniqueName: \"kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.802810 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.802846 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.904271 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.904317 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.904347 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rm7r\" (UniqueName: \"kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.904428 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" 
Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.904462 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.905594 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.906975 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.907876 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.910144 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:15:59 crc kubenswrapper[4593]: I0129 11:15:59.934407 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rm7r\" (UniqueName: \"kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r\") pod \"horizon-54cbb9595c-pxkrk\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:16:00 crc kubenswrapper[4593]: I0129 11:16:00.036582 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:16:00 crc kubenswrapper[4593]: I0129 11:16:00.053690 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:16:00 crc kubenswrapper[4593]: I0129 11:16:00.053757 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:16:00 crc kubenswrapper[4593]: I0129 11:16:00.726095 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:16:00 crc kubenswrapper[4593]: I0129 11:16:00.887596 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54cbb9595c-pxkrk" event={"ID":"4eb162fe-a643-47e7-b254-d6f394cc10a3","Type":"ContainerStarted","Data":"133a890db821bdd702c17ce64066fb1c09e02bfe05952cb746dcbd9bf0d47a30"} Jan 29 11:16:01 crc kubenswrapper[4593]: I0129 11:16:01.162555 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:01 crc kubenswrapper[4593]: > Jan 29 11:16:03 crc kubenswrapper[4593]: I0129 11:16:03.314716 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:03 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:03 crc kubenswrapper[4593]: > Jan 29 11:16:03 crc kubenswrapper[4593]: I0129 11:16:03.946743 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:16:03 crc kubenswrapper[4593]: I0129 11:16:03.947279 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.539414 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.581497 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.582823 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.586683 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.612584 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660190 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660270 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660327 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660355 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660385 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660433 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.660477 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bjjr\" (UniqueName: \"kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.698149 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.728391 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bdffb4784-5zp8q"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.730600 4593 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.743471 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bdffb4784-5zp8q"] Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.761922 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.761972 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.762014 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bjjr\" (UniqueName: \"kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.762060 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.762090 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.762130 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.762153 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.763445 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.763718 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts\") pod \"horizon-fbf566cdb-kbm9z\" (UID: 
\"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.764467 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.772268 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.775552 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.784117 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.806529 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bjjr\" (UniqueName: \"kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr\") pod \"horizon-fbf566cdb-kbm9z\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.864748 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-combined-ca-bundle\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.864842 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-tls-certs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.864917 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-scripts\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.864976 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-secret-key\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " 
pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.865034 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-config-data\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.865137 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8jvx\" (UniqueName: \"kubernetes.io/projected/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-kube-api-access-q8jvx\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.865164 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-logs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.909082 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967360 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-logs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967732 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-combined-ca-bundle\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967777 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-tls-certs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967836 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-scripts\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967886 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-secret-key\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.967927 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-config-data\") pod 
\"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.968013 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8jvx\" (UniqueName: \"kubernetes.io/projected/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-kube-api-access-q8jvx\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.970328 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-logs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.971414 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-scripts\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.972749 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-config-data\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.973868 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-combined-ca-bundle\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.983649 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-secret-key\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.984283 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-horizon-tls-certs\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:04 crc kubenswrapper[4593]: I0129 11:16:04.985239 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8jvx\" (UniqueName: \"kubernetes.io/projected/be4a01cd-2eb7-48e8-8a7e-eb02f8851188-kube-api-access-q8jvx\") pod \"horizon-5bdffb4784-5zp8q\" (UID: \"be4a01cd-2eb7-48e8-8a7e-eb02f8851188\") " pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:05 crc kubenswrapper[4593]: I0129 11:16:05.048368 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:16:05 crc kubenswrapper[4593]: I0129 11:16:05.276846 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:16:05 crc kubenswrapper[4593]: W0129 11:16:05.280831 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9761a4f_8669_4e74_9f8e_ed8b9778af11.slice/crio-ce4a773b0ca614eb00194b9785007fb66ed555cdb9faf1064f6db03538dbdfaf WatchSource:0}: Error finding container ce4a773b0ca614eb00194b9785007fb66ed555cdb9faf1064f6db03538dbdfaf: Status 404 returned error can't find the container with id ce4a773b0ca614eb00194b9785007fb66ed555cdb9faf1064f6db03538dbdfaf Jan 29 11:16:05 crc kubenswrapper[4593]: I0129 11:16:05.616344 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bdffb4784-5zp8q"] Jan 29 11:16:05 crc kubenswrapper[4593]: I0129 11:16:05.935092 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerStarted","Data":"ce4a773b0ca614eb00194b9785007fb66ed555cdb9faf1064f6db03538dbdfaf"} Jan 29 11:16:05 crc kubenswrapper[4593]: I0129 11:16:05.936136 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerStarted","Data":"d374a3bfab0e23a81102eb51da83b7c8b58f2c94e01933be70521699b15ff521"} Jan 29 11:16:10 crc kubenswrapper[4593]: I0129 11:16:10.987919 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" event={"ID":"8fb458d5-4cf6-41ed-bf24-cc63387a17f8","Type":"ContainerStarted","Data":"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3"} Jan 29 11:16:10 crc kubenswrapper[4593]: I0129 11:16:10.989780 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:16:11 crc kubenswrapper[4593]: I0129 11:16:11.026945 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" podStartSLOduration=16.026926767 podStartE2EDuration="16.026926767s" podCreationTimestamp="2026-01-29 11:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:16:11.025966501 +0000 UTC m=+1036.899000702" watchObservedRunningTime="2026-01-29 11:16:11.026926767 +0000 UTC m=+1036.899960958" Jan 29 11:16:11 crc kubenswrapper[4593]: I0129 11:16:11.103488 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:11 crc kubenswrapper[4593]: > Jan 29 11:16:13 crc kubenswrapper[4593]: I0129 11:16:13.302049 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:13 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:13 crc kubenswrapper[4593]: > Jan 29 11:16:13 crc kubenswrapper[4593]: E0129 11:16:13.717799 4593 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 29 11:16:13 crc kubenswrapper[4593]: E0129 11:16:13.718046 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4h65fh5ffh5fbhfbh578h5fch58dh595h545hf6h665h557h64ch546h586h56ch75h8h599h558hc8hb5h5bbh65h8bh554h665h54h5b4h5c8hb9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfxh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f789a029-2899-4cb2-8b99-55b77db98b9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:16 crc kubenswrapper[4593]: I0129 11:16:16.040754 4593 generic.go:334] "Generic (PLEG): container finished" podID="b3035bcf-246f-4bad-9c08-bd2188aa4098" containerID="d6a963ebfb97713a0a7f5c7f7df33e57f221e22a4c463e45ec8292bcb918f3d4" exitCode=0 Jan 29 11:16:16 crc kubenswrapper[4593]: I0129 11:16:16.041103 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7lbh" event={"ID":"b3035bcf-246f-4bad-9c08-bd2188aa4098","Type":"ContainerDied","Data":"d6a963ebfb97713a0a7f5c7f7df33e57f221e22a4c463e45ec8292bcb918f3d4"} Jan 29 11:16:16 crc kubenswrapper[4593]: I0129 11:16:16.637826 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:16:16 crc kubenswrapper[4593]: I0129 11:16:16.707430 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:16:16 crc kubenswrapper[4593]: I0129 11:16:16.708179 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" containerID="cri-o://3463601aba040d487968e25f4e62ebe73e4169690defbbff65cdb06d70d88e14" gracePeriod=10 Jan 29 11:16:18 crc kubenswrapper[4593]: I0129 11:16:18.064479 4593 generic.go:334] "Generic (PLEG): container finished" podID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerID="3463601aba040d487968e25f4e62ebe73e4169690defbbff65cdb06d70d88e14" exitCode=0 Jan 29 11:16:18 crc kubenswrapper[4593]: I0129 11:16:18.064813 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" event={"ID":"1dc04f8a-c522-49b8-bdf6-59b7edad2d63","Type":"ContainerDied","Data":"3463601aba040d487968e25f4e62ebe73e4169690defbbff65cdb06d70d88e14"} Jan 29 11:16:21 crc kubenswrapper[4593]: E0129 11:16:21.135189 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:16:21 crc kubenswrapper[4593]: E0129 11:16:21.135886 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n597hb6hb9h5fch689h5fbh56h86h5f4hf8h685h546hd7h596h5bbhcch67h56ch588h54ch7bh55bh76h5d5h5b9h584h76h67ch654hfdh699h5d9q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2llwr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5dc699bb9-mhr4g_openstack(8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4): ErrImagePull: rpc error: code = Canceled 
desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:21 crc kubenswrapper[4593]: I0129 11:16:21.138856 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:21 crc kubenswrapper[4593]: > Jan 29 11:16:21 crc kubenswrapper[4593]: E0129 11:16:21.145966 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5dc699bb9-mhr4g" podUID="8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" Jan 29 11:16:21 crc kubenswrapper[4593]: I0129 11:16:21.510303 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 29 11:16:23 crc kubenswrapper[4593]: I0129 11:16:23.339892 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:23 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:23 crc kubenswrapper[4593]: > Jan 29 11:16:24 crc kubenswrapper[4593]: E0129 11:16:24.381869 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:16:24 crc kubenswrapper[4593]: E0129 11:16:24.382850 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4h585hb4hd5h5ddh549h568hb9h574h696h555hfdh568h66bh68bh566h58h5d9h5c8h5d7h5dbh556h666h669h5c6h594hdfh579h99h677h54h5bbq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kd28q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-579dc58d97-z59ff_openstack(c95d7c5f-c170-4c14-966f-acdbfa95582d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:24 crc kubenswrapper[4593]: E0129 11:16:24.385783 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-579dc58d97-z59ff" podUID="c95d7c5f-c170-4c14-966f-acdbfa95582d" Jan 29 11:16:27 crc kubenswrapper[4593]: I0129 11:16:27.033234 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 29 11:16:28 crc kubenswrapper[4593]: E0129 11:16:28.146258 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 29 11:16:28 crc kubenswrapper[4593]: E0129 11:16:28.146453 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q8ts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-dd7hj_openstack(3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:28 crc kubenswrapper[4593]: E0129 11:16:28.148268 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-dd7hj" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.182471 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-579dc58d97-z59ff" event={"ID":"c95d7c5f-c170-4c14-966f-acdbfa95582d","Type":"ContainerDied","Data":"d50c694222ceb4b9afc6610284cd592d5480cbcc3fe1b8d77d9d22d8a2e395e4"} Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.182867 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d50c694222ceb4b9afc6610284cd592d5480cbcc3fe1b8d77d9d22d8a2e395e4" Jan 29 11:16:28 crc kubenswrapper[4593]: E0129 11:16:28.185532 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-dd7hj" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.193904 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365097 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd28q\" (UniqueName: \"kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q\") pod \"c95d7c5f-c170-4c14-966f-acdbfa95582d\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365187 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key\") pod \"c95d7c5f-c170-4c14-966f-acdbfa95582d\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365270 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts\") pod \"c95d7c5f-c170-4c14-966f-acdbfa95582d\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365384 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data\") pod \"c95d7c5f-c170-4c14-966f-acdbfa95582d\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365502 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs\") pod \"c95d7c5f-c170-4c14-966f-acdbfa95582d\" (UID: \"c95d7c5f-c170-4c14-966f-acdbfa95582d\") " Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.365815 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts" (OuterVolumeSpecName: "scripts") pod "c95d7c5f-c170-4c14-966f-acdbfa95582d" (UID: "c95d7c5f-c170-4c14-966f-acdbfa95582d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.366071 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.366270 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data" (OuterVolumeSpecName: "config-data") pod "c95d7c5f-c170-4c14-966f-acdbfa95582d" (UID: "c95d7c5f-c170-4c14-966f-acdbfa95582d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.367015 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs" (OuterVolumeSpecName: "logs") pod "c95d7c5f-c170-4c14-966f-acdbfa95582d" (UID: "c95d7c5f-c170-4c14-966f-acdbfa95582d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.371309 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q" (OuterVolumeSpecName: "kube-api-access-kd28q") pod "c95d7c5f-c170-4c14-966f-acdbfa95582d" (UID: "c95d7c5f-c170-4c14-966f-acdbfa95582d"). InnerVolumeSpecName "kube-api-access-kd28q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.384835 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c95d7c5f-c170-4c14-966f-acdbfa95582d" (UID: "c95d7c5f-c170-4c14-966f-acdbfa95582d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.468029 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c95d7c5f-c170-4c14-966f-acdbfa95582d-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.468066 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd28q\" (UniqueName: \"kubernetes.io/projected/c95d7c5f-c170-4c14-966f-acdbfa95582d-kube-api-access-kd28q\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.468658 4593 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c95d7c5f-c170-4c14-966f-acdbfa95582d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:28 crc kubenswrapper[4593]: I0129 11:16:28.468687 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c95d7c5f-c170-4c14-966f-acdbfa95582d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:29 crc kubenswrapper[4593]: I0129 11:16:29.191516 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-579dc58d97-z59ff" Jan 29 11:16:29 crc kubenswrapper[4593]: I0129 11:16:29.231593 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:16:29 crc kubenswrapper[4593]: I0129 11:16:29.240810 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-579dc58d97-z59ff"] Jan 29 11:16:31 crc kubenswrapper[4593]: I0129 11:16:31.086941 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c95d7c5f-c170-4c14-966f-acdbfa95582d" path="/var/lib/kubelet/pods/c95d7c5f-c170-4c14-966f-acdbfa95582d/volumes" Jan 29 11:16:31 crc kubenswrapper[4593]: I0129 11:16:31.108416 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:31 crc kubenswrapper[4593]: > Jan 29 11:16:31 crc kubenswrapper[4593]: I0129 11:16:31.510592 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 29 11:16:31 crc kubenswrapper[4593]: I0129 11:16:31.510738 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:16:32 crc kubenswrapper[4593]: I0129 11:16:32.297359 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:16:32 crc kubenswrapper[4593]: I0129 11:16:32.355052 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:16:32 crc kubenswrapper[4593]: I0129 11:16:32.547468 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:16:33 crc kubenswrapper[4593]: I0129 11:16:33.946438 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:16:33 crc kubenswrapper[4593]: I0129 11:16:33.946769 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:16:33 crc kubenswrapper[4593]: I0129 11:16:33.946826 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:16:33 crc kubenswrapper[4593]: I0129 11:16:33.947605 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:16:33 crc kubenswrapper[4593]: I0129 
11:16:33.947696 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d" gracePeriod=600 Jan 29 11:16:34 crc kubenswrapper[4593]: I0129 11:16:34.246168 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d" exitCode=0 Jan 29 11:16:34 crc kubenswrapper[4593]: I0129 11:16:34.246362 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4q5nh" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server" containerID="cri-o://40d4746c878ae8363cafa2fcc314b2c7cfd9f6b73acda03b1c6d583170650c6b" gracePeriod=2 Jan 29 11:16:34 crc kubenswrapper[4593]: I0129 11:16:34.246735 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d"} Jan 29 11:16:34 crc kubenswrapper[4593]: I0129 11:16:34.246777 4593 scope.go:117] "RemoveContainer" containerID="61a3ea70115ab5b387eba2a0b23159462567f420ec0f4cfd86c804f4a4ced4d2" Jan 29 11:16:35 crc kubenswrapper[4593]: I0129 11:16:35.258248 4593 generic.go:334] "Generic (PLEG): container finished" podID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerID="40d4746c878ae8363cafa2fcc314b2c7cfd9f6b73acda03b1c6d583170650c6b" exitCode=0 Jan 29 11:16:35 crc kubenswrapper[4593]: I0129 11:16:35.258331 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerDied","Data":"40d4746c878ae8363cafa2fcc314b2c7cfd9f6b73acda03b1c6d583170650c6b"} Jan 29 11:16:36 crc kubenswrapper[4593]: I0129 11:16:36.617421 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.267813 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.268330 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h545h68fh6dh5b8h54bh67bh8h5b8hch5b7h6fh8h556h648h557h5f5h85h54bh4h674h589h5bdh598h94h558h654h5d4h67dh58ch5dh56bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8jvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5bdffb4784-5zp8q_openstack(be4a01cd-2eb7-48e8-8a7e-eb02f8851188): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.270350 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.280315 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.280478 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndbhd5hc4h59dh76h94hd4h698h687h668hch66dh56ch7h5bch5d5hdbh655h5d4h584h54fh7fh6dhdch58bh5b4h645h8ch587h644h647h597q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rm7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-54cbb9595c-pxkrk_openstack(4eb162fe-a643-47e7-b254-d6f394cc10a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:38 crc kubenswrapper[4593]: E0129 11:16:38.283103 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-54cbb9595c-pxkrk" podUID="4eb162fe-a643-47e7-b254-d6f394cc10a3" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.353699 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.442925 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.443033 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.443159 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.443185 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.443303 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjqtw\" (UniqueName: \"kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.443396 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys\") pod \"b3035bcf-246f-4bad-9c08-bd2188aa4098\" (UID: \"b3035bcf-246f-4bad-9c08-bd2188aa4098\") " Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.469606 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.469660 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts" (OuterVolumeSpecName: "scripts") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.469742 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.501285 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw" (OuterVolumeSpecName: "kube-api-access-tjqtw") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "kube-api-access-tjqtw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.504925 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data" (OuterVolumeSpecName: "config-data") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.511947 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3035bcf-246f-4bad-9c08-bd2188aa4098" (UID: "b3035bcf-246f-4bad-9c08-bd2188aa4098"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546678 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjqtw\" (UniqueName: \"kubernetes.io/projected/b3035bcf-246f-4bad-9c08-bd2188aa4098-kube-api-access-tjqtw\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546706 4593 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546715 4593 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546723 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546731 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:38 crc kubenswrapper[4593]: I0129 11:16:38.546739 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3035bcf-246f-4bad-9c08-bd2188aa4098-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.096732 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k7lbh" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.102999 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.119414 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k7lbh" event={"ID":"b3035bcf-246f-4bad-9c08-bd2188aa4098","Type":"ContainerDied","Data":"f93093eedad3e691c33b05950a5766a9bfd338de35a4024df89e92e1e6b5e974"} Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.119448 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f93093eedad3e691c33b05950a5766a9bfd338de35a4024df89e92e1e6b5e974" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.454741 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-k7lbh"] Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.463219 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-k7lbh"] Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.556467 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8z7b6"] Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.556991 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3035bcf-246f-4bad-9c08-bd2188aa4098" containerName="keystone-bootstrap" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.557013 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3035bcf-246f-4bad-9c08-bd2188aa4098" containerName="keystone-bootstrap" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.557239 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3035bcf-246f-4bad-9c08-bd2188aa4098" containerName="keystone-bootstrap" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.558182 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.560560 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.560872 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h76tz" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.561610 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.561777 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.561902 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.590809 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8z7b6"] Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.671831 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.672009 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.672178 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.672285 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.672419 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.672535 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8bb\" (UniqueName: \"kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.773786 4593 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.773872 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.773930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.773986 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.774048 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.774110 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8bb\" (UniqueName: \"kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.780395 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.781144 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.781291 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.788177 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") 
" pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.788722 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.791781 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8bb\" (UniqueName: \"kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb\") pod \"keystone-bootstrap-8z7b6\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") " pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.837827 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.838023 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb5h54bhcdh588h5c8hb4h675h674hb6h566h664hd5h688hbdh68bh5bchf7hf4h578h544h5bch658h698h89h5cdh566h64bh596h555h644h5d5h5f8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bjjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-fbf566cdb-kbm9z_openstack(b9761a4f-8669-4e74-9f8e-ed8b9778af11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.840235 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", 
failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.894494 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.894668 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h678s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2wbrt_openstack(c39458c0-d624-4ed0-8444-417e479028d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:39 crc kubenswrapper[4593]: E0129 11:16:39.896017 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2wbrt" podUID="c39458c0-d624-4ed0-8444-417e479028d2" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.899589 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8z7b6" Jan 29 11:16:39 crc kubenswrapper[4593]: I0129 11:16:39.920520 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.078930 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts\") pod \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.079193 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs\") pod \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.079270 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data\") pod \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.079340 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key\") pod \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.079414 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2llwr\" (UniqueName: \"kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr\") pod \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\" (UID: \"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4\") " Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.081064 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs" (OuterVolumeSpecName: "logs") pod "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" (UID: "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.081914 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data" (OuterVolumeSpecName: "config-data") pod "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" (UID: "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.083532 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts" (OuterVolumeSpecName: "scripts") pod "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" (UID: "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.086550 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" (UID: "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.097254 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr" (OuterVolumeSpecName: "kube-api-access-2llwr") pod "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" (UID: "8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4"). InnerVolumeSpecName "kube-api-access-2llwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.131410 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5dc699bb9-mhr4g" event={"ID":"8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4","Type":"ContainerDied","Data":"89376b5d197b69125b3a6abd1f18c2e1c2f09575f848fb7b067180fd45d54911"} Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.131521 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5dc699bb9-mhr4g" Jan 29 11:16:40 crc kubenswrapper[4593]: E0129 11:16:40.140048 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-2wbrt" podUID="c39458c0-d624-4ed0-8444-417e479028d2" Jan 29 11:16:40 crc kubenswrapper[4593]: E0129 11:16:40.140679 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.222257 4593 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.222295 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2llwr\" (UniqueName: \"kubernetes.io/projected/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-kube-api-access-2llwr\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.222310 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.222328 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.222340 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.277696 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:16:40 crc kubenswrapper[4593]: I0129 11:16:40.285020 4593 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/horizon-5dc699bb9-mhr4g"] Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.092510 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4" path="/var/lib/kubelet/pods/8d5cd67d-1be1-4ac6-85e5-c62ec4c46fe4/volumes" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.100358 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3035bcf-246f-4bad-9c08-bd2188aa4098" path="/var/lib/kubelet/pods/b3035bcf-246f-4bad-9c08-bd2188aa4098/volumes" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.109245 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:16:41 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:16:41 crc kubenswrapper[4593]: > Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.396814 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.403058 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.413408 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data\") pod \"4eb162fe-a643-47e7-b254-d6f394cc10a3\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.414165 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rm7r\" (UniqueName: \"kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r\") pod \"4eb162fe-a643-47e7-b254-d6f394cc10a3\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.414927 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb\") pod \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415013 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc\") pod \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415074 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2649\" (UniqueName: \"kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649\") pod \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415138 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key\") pod \"4eb162fe-a643-47e7-b254-d6f394cc10a3\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415218 4593 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts\") pod \"4eb162fe-a643-47e7-b254-d6f394cc10a3\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415248 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs\") pod \"4eb162fe-a643-47e7-b254-d6f394cc10a3\" (UID: \"4eb162fe-a643-47e7-b254-d6f394cc10a3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415284 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb\") pod \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.415327 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config\") pod \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\" (UID: \"1dc04f8a-c522-49b8-bdf6-59b7edad2d63\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.414104 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data" (OuterVolumeSpecName: "config-data") pod "4eb162fe-a643-47e7-b254-d6f394cc10a3" (UID: "4eb162fe-a643-47e7-b254-d6f394cc10a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.420403 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts" (OuterVolumeSpecName: "scripts") pod "4eb162fe-a643-47e7-b254-d6f394cc10a3" (UID: "4eb162fe-a643-47e7-b254-d6f394cc10a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.432001 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs" (OuterVolumeSpecName: "logs") pod "4eb162fe-a643-47e7-b254-d6f394cc10a3" (UID: "4eb162fe-a643-47e7-b254-d6f394cc10a3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: E0129 11:16:41.451525 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 29 11:16:41 crc kubenswrapper[4593]: E0129 11:16:41.451677 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hb8cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qqbm9_openstack(9a0467fe-4786-4231-bf52-8a305e9a4f89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.453352 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r" (OuterVolumeSpecName: "kube-api-access-8rm7r") pod "4eb162fe-a643-47e7-b254-d6f394cc10a3" (UID: "4eb162fe-a643-47e7-b254-d6f394cc10a3"). InnerVolumeSpecName "kube-api-access-8rm7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: E0129 11:16:41.453945 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qqbm9" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.458558 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4eb162fe-a643-47e7-b254-d6f394cc10a3" (UID: "4eb162fe-a643-47e7-b254-d6f394cc10a3"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.462650 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.470809 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649" (OuterVolumeSpecName: "kube-api-access-h2649") pod "1dc04f8a-c522-49b8-bdf6-59b7edad2d63" (UID: "1dc04f8a-c522-49b8-bdf6-59b7edad2d63"). InnerVolumeSpecName "kube-api-access-h2649". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.519187 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1dc04f8a-c522-49b8-bdf6-59b7edad2d63" (UID: "1dc04f8a-c522-49b8-bdf6-59b7edad2d63"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.522262 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config" (OuterVolumeSpecName: "config") pod "1dc04f8a-c522-49b8-bdf6-59b7edad2d63" (UID: "1dc04f8a-c522-49b8-bdf6-59b7edad2d63"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.525089 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities\") pod \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.525385 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content\") pod \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.526330 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnh7f\" (UniqueName: \"kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f\") pod \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\" (UID: \"fef7c251-cfb4-4d34-995d-1994b7a8dbe3\") " Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527083 4593 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4eb162fe-a643-47e7-b254-d6f394cc10a3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527108 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527120 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4eb162fe-a643-47e7-b254-d6f394cc10a3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527133 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527143 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4eb162fe-a643-47e7-b254-d6f394cc10a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527165 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rm7r\" (UniqueName: \"kubernetes.io/projected/4eb162fe-a643-47e7-b254-d6f394cc10a3-kube-api-access-8rm7r\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527177 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.527188 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2649\" (UniqueName: \"kubernetes.io/projected/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-kube-api-access-h2649\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.548625 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities" (OuterVolumeSpecName: "utilities") pod "fef7c251-cfb4-4d34-995d-1994b7a8dbe3" (UID: 
"fef7c251-cfb4-4d34-995d-1994b7a8dbe3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.554784 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f" (OuterVolumeSpecName: "kube-api-access-mnh7f") pod "fef7c251-cfb4-4d34-995d-1994b7a8dbe3" (UID: "fef7c251-cfb4-4d34-995d-1994b7a8dbe3"). InnerVolumeSpecName "kube-api-access-mnh7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.567741 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dc04f8a-c522-49b8-bdf6-59b7edad2d63" (UID: "1dc04f8a-c522-49b8-bdf6-59b7edad2d63"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.569682 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1dc04f8a-c522-49b8-bdf6-59b7edad2d63" (UID: "1dc04f8a-c522-49b8-bdf6-59b7edad2d63"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.591012 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fef7c251-cfb4-4d34-995d-1994b7a8dbe3" (UID: "fef7c251-cfb4-4d34-995d-1994b7a8dbe3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.629478 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.629519 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.629536 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.629550 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dc04f8a-c522-49b8-bdf6-59b7edad2d63-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: I0129 11:16:41.629563 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnh7f\" (UniqueName: \"kubernetes.io/projected/fef7c251-cfb4-4d34-995d-1994b7a8dbe3-kube-api-access-mnh7f\") on node \"crc\" DevicePath \"\"" Jan 29 11:16:41 crc kubenswrapper[4593]: E0129 11:16:41.776438 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified" Jan 29 11:16:41 crc kubenswrapper[4593]: E0129 11:16:41.776662 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f4h65fh5ffh5fbhfbh578h5fch58dh595h545hf6h665h557h64ch546h586h56ch75h8h599h558hc8hb5h5bbh65h8bh554h665h54h5b4h5c8hb9q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfxh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f789a029-2899-4cb2-8b99-55b77db98b9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.155199 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" event={"ID":"1dc04f8a-c522-49b8-bdf6-59b7edad2d63","Type":"ContainerDied","Data":"2b0a11af2b235a2fb8adafd584c05dc53c5aec7086cbb35dcb104dd6b636f9bc"} Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.155486 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-lm2dg" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.155583 4593 scope.go:117] "RemoveContainer" containerID="3463601aba040d487968e25f4e62ebe73e4169690defbbff65cdb06d70d88e14" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.159777 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q5nh" event={"ID":"fef7c251-cfb4-4d34-995d-1994b7a8dbe3","Type":"ContainerDied","Data":"78dbfe42e92421682419cdaea165d73392eb4f589d0fece85d9b2c89989dd32e"} Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.160853 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q5nh" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.161756 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-54cbb9595c-pxkrk" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.163856 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54cbb9595c-pxkrk" event={"ID":"4eb162fe-a643-47e7-b254-d6f394cc10a3","Type":"ContainerDied","Data":"133a890db821bdd702c17ce64066fb1c09e02bfe05952cb746dcbd9bf0d47a30"} Jan 29 11:16:42 crc kubenswrapper[4593]: E0129 11:16:42.165335 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qqbm9" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.273224 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.283401 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-lm2dg"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.297700 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.312495 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4q5nh"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.341800 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.354240 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-54cbb9595c-pxkrk"] Jan 29 11:16:42 crc kubenswrapper[4593]: I0129 11:16:42.362082 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8z7b6"] Jan 29 11:16:43 crc kubenswrapper[4593]: I0129 11:16:43.089602 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" path="/var/lib/kubelet/pods/1dc04f8a-c522-49b8-bdf6-59b7edad2d63/volumes" Jan 29 11:16:43 crc kubenswrapper[4593]: I0129 11:16:43.094613 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eb162fe-a643-47e7-b254-d6f394cc10a3" path="/var/lib/kubelet/pods/4eb162fe-a643-47e7-b254-d6f394cc10a3/volumes" Jan 29 11:16:43 crc kubenswrapper[4593]: I0129 11:16:43.095152 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" path="/var/lib/kubelet/pods/fef7c251-cfb4-4d34-995d-1994b7a8dbe3/volumes" Jan 29 11:16:43 crc kubenswrapper[4593]: I0129 11:16:43.715522 4593 scope.go:117] "RemoveContainer" containerID="3a1884f5780e941a8c795fbe0356484ff14b38b8354e043148a53f7b7fef73d5" Jan 29 11:16:44 crc kubenswrapper[4593]: I0129 11:16:44.181036 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8z7b6" event={"ID":"31f590aa-412a-41ab-92fd-2202c9b456b4","Type":"ContainerStarted","Data":"c46c10c9ff8d263c23d78a62789956ef4717d8d84b1c8aaff15cc76667c7e691"} Jan 29 11:16:48 crc kubenswrapper[4593]: I0129 11:16:48.769996 4593 scope.go:117] "RemoveContainer" containerID="40d4746c878ae8363cafa2fcc314b2c7cfd9f6b73acda03b1c6d583170650c6b" Jan 29 11:16:48 crc kubenswrapper[4593]: I0129 11:16:48.864618 4593 scope.go:117] "RemoveContainer" containerID="26d8db7acae03adbd8a96b95ffa16e626d4c4da2a6d0ab63963a1ab8a16e14e7" Jan 29 11:16:48 crc kubenswrapper[4593]: 
Jan 29 11:16:48 crc kubenswrapper[4593]: I0129 11:16:48.948928 4593 scope.go:117] "RemoveContainer" containerID="c6e6f1ac55c53b64f5a8d09aab84fcbf98dc6146a8ab819b2f4a3c9dfdc9a62a"
Jan 29 11:16:49 crc kubenswrapper[4593]: I0129 11:16:49.227378 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dd7hj" event={"ID":"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9","Type":"ContainerStarted","Data":"0f2f3f0be6cdd2683b007fbff3ab49a0dd093c0aa8e7bd19c6543357b5ba29b3"}
Jan 29 11:16:49 crc kubenswrapper[4593]: I0129 11:16:49.229974 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002"}
Jan 29 11:16:51 crc kubenswrapper[4593]: I0129 11:16:51.115280 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:16:51 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:16:51 crc kubenswrapper[4593]: >
Jan 29 11:16:51 crc kubenswrapper[4593]: I0129 11:16:51.270021 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8z7b6" event={"ID":"31f590aa-412a-41ab-92fd-2202c9b456b4","Type":"ContainerStarted","Data":"dc02c784a57ca12374f0aced757e32f43b54151f61a6897de1dd6a96f158aedc"}
Jan 29 11:16:51 crc kubenswrapper[4593]: I0129 11:16:51.295183 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8z7b6" podStartSLOduration=12.295161663 podStartE2EDuration="12.295161663s" podCreationTimestamp="2026-01-29 11:16:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:16:51.282528476 +0000 UTC m=+1077.155562677" watchObservedRunningTime="2026-01-29 11:16:51.295161663 +0000 UTC m=+1077.168195864"
Jan 29 11:16:51 crc kubenswrapper[4593]: I0129 11:16:51.329559 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-dd7hj" podStartSLOduration=12.062328605 podStartE2EDuration="56.32954128s" podCreationTimestamp="2026-01-29 11:15:55 +0000 UTC" firstStartedPulling="2026-01-29 11:15:57.667780597 +0000 UTC m=+1023.540814788" lastFinishedPulling="2026-01-29 11:16:41.934993272 +0000 UTC m=+1067.808027463" observedRunningTime="2026-01-29 11:16:51.326171231 +0000 UTC m=+1077.199205462" watchObservedRunningTime="2026-01-29 11:16:51.32954128 +0000 UTC m=+1077.202575471"
Jan 29 11:17:01 crc kubenswrapper[4593]: I0129 11:17:01.603688 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:17:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:17:01 crc kubenswrapper[4593]: >
Jan 29 11:17:09 crc kubenswrapper[4593]: E0129 11:17:09.851376 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified"
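The redhat-operators-k4l8n startup probe keeps failing with the same output: the probe gives the registry server one second to accept a connection on :50051 before reporting failure. A rough stand-in for that check (plain TCP connect here, whereas the actual probe appears to be a gRPC health check against the same port):

# Illustrative one-second reachability check against the registry port.
import socket

def startup_probe(host: str = "127.0.0.1", port: int = 50051,
                  timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Mirrors the failure output captured in the probe entries above.
        print(f'timeout: failed to connect service ":{port}" within {timeout:.0f}s')
        return False

Because this is a startup probe, the kubelet keeps retrying on its period without restarting the container until the failure threshold is exhausted, which is why the same failure recurs at 11:16:51, 11:17:01, and 11:17:11.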
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb5h54bhcdh588h5c8hb4h675h674hb6h566h664hd5h688hbdh68bh5bchf7hf4h578h544h5bch658h698h89h5cdh566h64bh596h555h644h5d5h5f8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bjjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-fbf566cdb-kbm9z_openstack(b9761a4f-8669-4e74-9f8e-ed8b9778af11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:17:10 crc kubenswrapper[4593]: E0129 11:17:10.028893 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Jan 29 11:17:10 crc kubenswrapper[4593]: E0129 11:17:10.029051 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfxh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f789a029-2899-4cb2-8b99-55b77db98b9f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:17:10 crc kubenswrapper[4593]: E0129 11:17:10.216602 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" Jan 29 11:17:10 crc kubenswrapper[4593]: I0129 11:17:10.443394 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerStarted","Data":"a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996"} Jan 29 11:17:10 crc kubenswrapper[4593]: I0129 11:17:10.461588 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerStarted","Data":"4ea44b885ada361be4b5f0a32e896db941b82f262b405096f4aa89cb728d6d62"} Jan 29 11:17:10 crc kubenswrapper[4593]: I0129 11:17:10.469303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2wbrt" event={"ID":"c39458c0-d624-4ed0-8444-417e479028d2","Type":"ContainerStarted","Data":"99ff344d90d5bdd893d1e77e101cd6e34638c02acf7127cecbfee61fab7d69ad"} Jan 29 11:17:10 crc kubenswrapper[4593]: I0129 11:17:10.509439 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2wbrt" podStartSLOduration=3.203243207 podStartE2EDuration="1m15.50940547s" podCreationTimestamp="2026-01-29 11:15:55 +0000 UTC" firstStartedPulling="2026-01-29 11:15:57.827011937 +0000 UTC m=+1023.700046128" lastFinishedPulling="2026-01-29 11:17:10.1331742 +0000 UTC m=+1096.006208391" observedRunningTime="2026-01-29 11:17:10.499407183 +0000 UTC m=+1096.372441374" watchObservedRunningTime="2026-01-29 11:17:10.50940547 +0000 UTC m=+1096.382439661" Jan 29 11:17:11 crc kubenswrapper[4593]: I0129 11:17:11.113541 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:17:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:17:11 crc kubenswrapper[4593]: > Jan 29 11:17:11 crc kubenswrapper[4593]: I0129 11:17:11.485243 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerStarted","Data":"79e5fad4ce8a136539fe157f20b007cd9dda01813dc5bd26b79f98167ce8f3c8"} Jan 29 11:17:11 crc kubenswrapper[4593]: I0129 11:17:11.489303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerStarted","Data":"948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1"} Jan 29 
11:17:11 crc kubenswrapper[4593]: I0129 11:17:11.521965 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-fbf566cdb-kbm9z" podStartSLOduration=-9223371969.332834 podStartE2EDuration="1m7.52194184s" podCreationTimestamp="2026-01-29 11:16:04 +0000 UTC" firstStartedPulling="2026-01-29 11:16:05.282293227 +0000 UTC m=+1031.155327418" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:11.517184983 +0000 UTC m=+1097.390219184" watchObservedRunningTime="2026-01-29 11:17:11.52194184 +0000 UTC m=+1097.394976031" Jan 29 11:17:11 crc kubenswrapper[4593]: I0129 11:17:11.552436 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5bdffb4784-5zp8q" podStartSLOduration=3.303998747 podStartE2EDuration="1m7.552411513s" podCreationTimestamp="2026-01-29 11:16:04 +0000 UTC" firstStartedPulling="2026-01-29 11:16:05.631917367 +0000 UTC m=+1031.504951558" lastFinishedPulling="2026-01-29 11:17:09.880330133 +0000 UTC m=+1095.753364324" observedRunningTime="2026-01-29 11:17:11.544553643 +0000 UTC m=+1097.417587834" watchObservedRunningTime="2026-01-29 11:17:11.552411513 +0000 UTC m=+1097.425445704" Jan 29 11:17:12 crc kubenswrapper[4593]: I0129 11:17:12.525395 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqbm9" event={"ID":"9a0467fe-4786-4231-bf52-8a305e9a4f89","Type":"ContainerStarted","Data":"06197cae1e3adecc87ccca3058356e85b083a773c3ebd8eeabc6c5475d59dd8e"} Jan 29 11:17:12 crc kubenswrapper[4593]: I0129 11:17:12.559682 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qqbm9" podStartSLOduration=4.284647394 podStartE2EDuration="1m17.559661512s" podCreationTimestamp="2026-01-29 11:15:55 +0000 UTC" firstStartedPulling="2026-01-29 11:15:56.857936676 +0000 UTC m=+1022.730970867" lastFinishedPulling="2026-01-29 11:17:10.132950784 +0000 UTC m=+1096.005984985" observedRunningTime="2026-01-29 11:17:12.552994884 +0000 UTC m=+1098.426029075" watchObservedRunningTime="2026-01-29 11:17:12.559661512 +0000 UTC m=+1098.432695703" Jan 29 11:17:13 crc kubenswrapper[4593]: I0129 11:17:13.650259 4593 generic.go:334] "Generic (PLEG): container finished" podID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" containerID="0f2f3f0be6cdd2683b007fbff3ab49a0dd093c0aa8e7bd19c6543357b5ba29b3" exitCode=0 Jan 29 11:17:13 crc kubenswrapper[4593]: I0129 11:17:13.650665 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dd7hj" event={"ID":"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9","Type":"ContainerDied","Data":"0f2f3f0be6cdd2683b007fbff3ab49a0dd093c0aa8e7bd19c6543357b5ba29b3"} Jan 29 11:17:13 crc kubenswrapper[4593]: I0129 11:17:13.657908 4593 generic.go:334] "Generic (PLEG): container finished" podID="31f590aa-412a-41ab-92fd-2202c9b456b4" containerID="dc02c784a57ca12374f0aced757e32f43b54151f61a6897de1dd6a96f158aedc" exitCode=0 Jan 29 11:17:13 crc kubenswrapper[4593]: I0129 11:17:13.657979 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8z7b6" event={"ID":"31f590aa-412a-41ab-92fd-2202c9b456b4","Type":"ContainerDied","Data":"dc02c784a57ca12374f0aced757e32f43b54151f61a6897de1dd6a96f158aedc"} Jan 29 11:17:14 crc kubenswrapper[4593]: I0129 11:17:14.668115 4593 generic.go:334] "Generic (PLEG): container finished" podID="a6bbbb39-f79c-4647-976b-6225ac21e63b" containerID="6029f6551650b545bead0d4f37b1f5f3a81f76cf7f6f139456a1354a00bcaf99" exitCode=0 Jan 29 11:17:14 crc 
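The podStartSLOduration figures above are the end-to-end startup time with the image-pull window subtracted, since pulling is treated as outside the kubelet's startup SLO. A worked check against the placement-db-sync-dd7hj entry (timestamps copied from the log; fractional seconds truncated to microseconds for strptime):

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
first_pull = datetime.strptime("2026-01-29 11:15:57.667780", fmt)
last_pull = datetime.strptime("2026-01-29 11:16:41.934993", fmt)
e2e = 56.32954128  # podStartE2EDuration from the entry above
slo = e2e - (last_pull - first_pull).total_seconds()
print(f"{slo:.6f}")  # 12.062328 -- matches podStartSLOduration=12.062328605

The absurd podStartSLOduration=-9223371969.332834 for horizon-fbf566cdb-kbm9z falls out of the same subtraction when lastFinishedPulling is still the zero time (0001-01-01, as its entry shows): the pull window becomes a huge negative interval and the nanosecond arithmetic lands at roughly the int64 Duration minimum plus the 67.52s E2E time, so the printed value wraps negative.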
Jan 29 11:17:14 crc kubenswrapper[4593]: I0129 11:17:14.669161 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-db54x" event={"ID":"a6bbbb39-f79c-4647-976b-6225ac21e63b","Type":"ContainerDied","Data":"6029f6551650b545bead0d4f37b1f5f3a81f76cf7f6f139456a1354a00bcaf99"}
Jan 29 11:17:14 crc kubenswrapper[4593]: I0129 11:17:14.909961 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf566cdb-kbm9z"
Jan 29 11:17:14 crc kubenswrapper[4593]: I0129 11:17:14.910317 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf566cdb-kbm9z"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.060406 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bdffb4784-5zp8q"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.060772 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bdffb4784-5zp8q"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.069761 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8z7b6"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190252 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190458 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190501 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190538 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190598 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.190760 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl8bb\" (UniqueName: \"kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb\") pod \"31f590aa-412a-41ab-92fd-2202c9b456b4\" (UID: \"31f590aa-412a-41ab-92fd-2202c9b456b4\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.200840 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb" (OuterVolumeSpecName: "kube-api-access-fl8bb") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "kube-api-access-fl8bb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.202562 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts" (OuterVolumeSpecName: "scripts") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.210133 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.214145 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.235740 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.241386 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data" (OuterVolumeSpecName: "config-data") pod "31f590aa-412a-41ab-92fd-2202c9b456b4" (UID: "31f590aa-412a-41ab-92fd-2202c9b456b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.249929 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dd7hj"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.292979 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs\") pod \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293080 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data\") pod \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293184 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts\") pod \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293219 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q8ts\" (UniqueName: \"kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts\") pod \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293276 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle\") pod \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\" (UID: \"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9\") "
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293896 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl8bb\" (UniqueName: \"kubernetes.io/projected/31f590aa-412a-41ab-92fd-2202c9b456b4-kube-api-access-fl8bb\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293924 4593 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293935 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293946 4593 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293973 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293985 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f590aa-412a-41ab-92fd-2202c9b456b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.293982 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs" (OuterVolumeSpecName: "logs") pod "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" (UID: "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.305211 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts" (OuterVolumeSpecName: "kube-api-access-6q8ts") pod "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" (UID: "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9"). InnerVolumeSpecName "kube-api-access-6q8ts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.308680 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts" (OuterVolumeSpecName: "scripts") pod "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" (UID: "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.320203 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data" (OuterVolumeSpecName: "config-data") pod "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" (UID: "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.333681 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" (UID: "3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.434187 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-logs\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.434231 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.434244 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.434262 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q8ts\" (UniqueName: \"kubernetes.io/projected/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-kube-api-access-6q8ts\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.434276 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.685050 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-dd7hj"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.685028 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-dd7hj" event={"ID":"3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9","Type":"ContainerDied","Data":"b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1"}
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.685903 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b372f3c3ba93038ebc9f4d2fddd539867a9e0d0e69241e478915f67681fd81a1"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.693031 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8z7b6"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.693110 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8z7b6" event={"ID":"31f590aa-412a-41ab-92fd-2202c9b456b4","Type":"ContainerDied","Data":"c46c10c9ff8d263c23d78a62789956ef4717d8d84b1c8aaff15cc76667c7e691"}
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.693150 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c46c10c9ff8d263c23d78a62789956ef4717d8d84b1c8aaff15cc76667c7e691"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.926640 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-669db997bd-hhbcc"]
Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927078 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="init"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927102 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="init"
Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927121 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="extract-utilities"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927129 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="extract-utilities"
Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927155 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927163 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns"
Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927176 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" containerName="placement-db-sync"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927184 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" containerName="placement-db-sync"
Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927193 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31f590aa-412a-41ab-92fd-2202c9b456b4" containerName="keystone-bootstrap"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927201 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="31f590aa-412a-41ab-92fd-2202c9b456b4" containerName="keystone-bootstrap"
Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927216 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927223 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server"
Jan 29 11:17:15 crc kubenswrapper[4593]: E0129 11:17:15.927232 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="extract-content"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927241 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="extract-content"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927452 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc04f8a-c522-49b8-bdf6-59b7edad2d63" containerName="dnsmasq-dns"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927469 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef7c251-cfb4-4d34-995d-1994b7a8dbe3" containerName="registry-server"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927485 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="31f590aa-412a-41ab-92fd-2202c9b456b4" containerName="keystone-bootstrap"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.927506 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" containerName="placement-db-sync"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.928569 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.936451 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.936745 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2pqk2"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.936861 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.936973 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.937069 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.949051 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7f96568f6f-lfzv9"]
Need to start a new one" pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.962069 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.962767 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.964342 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.966159 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-h76tz" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.966419 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 29 11:17:15 crc kubenswrapper[4593]: I0129 11:17:15.966576 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.001096 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.002896 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f96568f6f-lfzv9"] Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.050961 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-public-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051049 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wk99\" (UniqueName: \"kubernetes.io/projected/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-kube-api-access-4wk99\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051070 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051200 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb55m\" (UniqueName: \"kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051227 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-scripts\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051247 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051264 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051421 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-fernet-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051499 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-config-data\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051541 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-combined-ca-bundle\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051622 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051839 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-internal-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051919 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.051977 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-credential-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.052011 
4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.153532 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-scripts\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.185948 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.185991 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186077 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-fernet-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186132 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-config-data\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186169 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-combined-ca-bundle\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186248 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186336 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-internal-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186366 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186417 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-credential-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186446 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186487 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-public-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186734 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wk99\" (UniqueName: \"kubernetes.io/projected/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-kube-api-access-4wk99\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.186770 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.191994 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb55m\" (UniqueName: \"kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.159275 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-scripts\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.194064 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.200573 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.200963 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-public-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.205238 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.219207 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.219414 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-fernet-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.220938 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.221491 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-combined-ca-bundle\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.221773 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-config-data\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.222071 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-credential-keys\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.224178 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-internal-tls-certs\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.225778 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb55m\" (UniqueName: \"kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.232356 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data\") pod \"placement-669db997bd-hhbcc\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.244419 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wk99\" (UniqueName: \"kubernetes.io/projected/e2e767a2-2e4c-4a41-995f-1f0ca9248d1a-kube-api-access-4wk99\") pod \"keystone-7f96568f6f-lfzv9\" (UID: \"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a\") " pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.245497 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-669db997bd-hhbcc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.275716 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f96568f6f-lfzv9"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.463644 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-869645f564-n6fhc"]
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.465431 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-869645f564-n6fhc"
Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.518781 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-869645f564-n6fhc"]
Need to start a new one" pod="openstack/glance-db-sync-db54x" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609778 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-logs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609812 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-combined-ca-bundle\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609859 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-config-data\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609878 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-internal-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609901 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-scripts\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609920 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-public-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.609940 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djlq5\" (UniqueName: \"kubernetes.io/projected/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-kube-api-access-djlq5\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.712413 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data\") pod \"a6bbbb39-f79c-4647-976b-6225ac21e63b\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.712452 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data\") pod \"a6bbbb39-f79c-4647-976b-6225ac21e63b\" (UID: 
\"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.712524 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4lrf\" (UniqueName: \"kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf\") pod \"a6bbbb39-f79c-4647-976b-6225ac21e63b\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.712959 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle\") pod \"a6bbbb39-f79c-4647-976b-6225ac21e63b\" (UID: \"a6bbbb39-f79c-4647-976b-6225ac21e63b\") " Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713228 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-scripts\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713260 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-public-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713280 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djlq5\" (UniqueName: \"kubernetes.io/projected/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-kube-api-access-djlq5\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713390 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-logs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713411 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-combined-ca-bundle\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713463 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-config-data\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.713485 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-internal-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.715097 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-logs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.724775 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-combined-ca-bundle\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.730468 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf" (OuterVolumeSpecName: "kube-api-access-z4lrf") pod "a6bbbb39-f79c-4647-976b-6225ac21e63b" (UID: "a6bbbb39-f79c-4647-976b-6225ac21e63b"). InnerVolumeSpecName "kube-api-access-z4lrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.730911 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-db54x" event={"ID":"a6bbbb39-f79c-4647-976b-6225ac21e63b","Type":"ContainerDied","Data":"75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59"} Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.730959 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75cc780a00b24f186282ea44e59ad68ac3ba85606bfd4c75fd53ab81ca596e59" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.731023 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-db54x" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.731703 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-config-data\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.735596 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djlq5\" (UniqueName: \"kubernetes.io/projected/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-kube-api-access-djlq5\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.741188 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a6bbbb39-f79c-4647-976b-6225ac21e63b" (UID: "a6bbbb39-f79c-4647-976b-6225ac21e63b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.742692 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-internal-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.745391 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-scripts\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.748333 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae8bb4fd-b1d8-4a6a-ac95-9935c4458747-public-tls-certs\") pod \"placement-869645f564-n6fhc\" (UID: \"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747\") " pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.780169 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6bbbb39-f79c-4647-976b-6225ac21e63b" (UID: "a6bbbb39-f79c-4647-976b-6225ac21e63b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.788734 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.830367 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.830719 4593 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.830741 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4lrf\" (UniqueName: \"kubernetes.io/projected/a6bbbb39-f79c-4647-976b-6225ac21e63b-kube-api-access-z4lrf\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.836817 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data" (OuterVolumeSpecName: "config-data") pod "a6bbbb39-f79c-4647-976b-6225ac21e63b" (UID: "a6bbbb39-f79c-4647-976b-6225ac21e63b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:16 crc kubenswrapper[4593]: I0129 11:17:16.932748 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6bbbb39-f79c-4647-976b-6225ac21e63b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.136057 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f96568f6f-lfzv9"] Jan 29 11:17:17 crc kubenswrapper[4593]: W0129 11:17:17.139451 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2e767a2_2e4c_4a41_995f_1f0ca9248d1a.slice/crio-be8bf9e9bc37b34c7dffbadfdc06cede36aee25b1c41ed92602331a8e9090d90 WatchSource:0}: Error finding container be8bf9e9bc37b34c7dffbadfdc06cede36aee25b1c41ed92602331a8e9090d90: Status 404 returned error can't find the container with id be8bf9e9bc37b34c7dffbadfdc06cede36aee25b1c41ed92602331a8e9090d90 Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.206126 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.409411 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"] Jan 29 11:17:17 crc kubenswrapper[4593]: E0129 11:17:17.415234 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" containerName="glance-db-sync" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.415268 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" containerName="glance-db-sync" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.415517 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" containerName="glance-db-sync" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.416354 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.451168 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"] Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550679 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550726 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550772 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrv9n\" (UniqueName: \"kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550800 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550859 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.550879 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.565479 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-869645f564-n6fhc"] Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.674610 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.674694 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.677887 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.678342 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.679409 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.679923 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrv9n\" (UniqueName: \"kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.680525 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.682230 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.683459 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.679211 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.707450 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrv9n\" (UniqueName: \"kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:17 crc kubenswrapper[4593]: 
Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.708660 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbcrq\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq"
Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.744326 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq"
Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.749127 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f96568f6f-lfzv9" event={"ID":"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a","Type":"ContainerStarted","Data":"be8bf9e9bc37b34c7dffbadfdc06cede36aee25b1c41ed92602331a8e9090d90"}
Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.767875 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerStarted","Data":"32fdfc7881c963abaad68073c4d49c25e3c8cc05f9fcc814488ad8238d96326b"}
Jan 29 11:17:17 crc kubenswrapper[4593]: I0129 11:17:17.775528 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-869645f564-n6fhc" event={"ID":"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747","Type":"ContainerStarted","Data":"1c3e9e98f800409a9823c6a497606c5854e95eee895be2ee59cd726addc960dc"}
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.231970 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.233965 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.242023 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.242761 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.255790 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lfv28"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.287044 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415210 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415335 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415458 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvcks\" (UniqueName: \"kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415504 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415566 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415593 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.415660 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528100 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvcks\" (UniqueName: \"kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528155 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528184 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528201 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528227 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528279 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528318 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.528882 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.529187 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.529200 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.534702 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.535171 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.534625 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0"
\"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.583828 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") " pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.590953 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"] Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.613099 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.636245 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.641885 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.649543 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.657363 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.797061 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-869645f564-n6fhc" event={"ID":"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747","Type":"ContainerStarted","Data":"594000ead793855509f5118738c4f17be545b8f782da5155ae07305547f20250"} Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.803892 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f96568f6f-lfzv9" event={"ID":"e2e767a2-2e4c-4a41-995f-1f0ca9248d1a","Type":"ContainerStarted","Data":"74f0241ce60422f1a94e55be9dd85f880e1040fce58ac0dc98969f9d916be9bb"} Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.805063 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.810169 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerStarted","Data":"fca879370bdf54a12b3a105098148973a13eddb0bbbb835f4a9653bb9e65ca80"} Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.810210 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerStarted","Data":"b2737a73be5d76fb8f211f8bf7e6f7f5d5df136a1e001d613ced73be513cce7c"} Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.811093 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.811125 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.829607 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7f96568f6f-lfzv9" podStartSLOduration=3.82958187 podStartE2EDuration="3.82958187s" 
podCreationTimestamp="2026-01-29 11:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:18.821375211 +0000 UTC m=+1104.694409402" watchObservedRunningTime="2026-01-29 11:17:18.82958187 +0000 UTC m=+1104.702616071" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.832799 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.832880 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpxq8\" (UniqueName: \"kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.832930 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.833025 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.833116 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.833158 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.833181 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.857699 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-669db997bd-hhbcc" podStartSLOduration=3.857679569 podStartE2EDuration="3.857679569s" podCreationTimestamp="2026-01-29 11:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 11:17:18.843592044 +0000 UTC m=+1104.716626255" watchObservedRunningTime="2026-01-29 11:17:18.857679569 +0000 UTC m=+1104.730713760" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937155 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937244 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937283 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937310 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937342 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpxq8\" (UniqueName: \"kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937396 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.937559 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.938701 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.939270 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.942447 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.942765 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.953263 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.955035 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.966036 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpxq8\" (UniqueName: \"kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:18 crc kubenswrapper[4593]: I0129 11:17:18.984562 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:17:19 crc kubenswrapper[4593]: I0129 11:17:19.270802 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:20 crc kubenswrapper[4593]: I0129 11:17:20.381061 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:20 crc kubenswrapper[4593]: I0129 11:17:20.503012 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:21 crc kubenswrapper[4593]: I0129 11:17:21.121934 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:17:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:17:21 crc kubenswrapper[4593]: > Jan 29 11:17:22 crc kubenswrapper[4593]: I0129 11:17:22.852732 4593 generic.go:334] "Generic (PLEG): container finished" podID="c39458c0-d624-4ed0-8444-417e479028d2" containerID="99ff344d90d5bdd893d1e77e101cd6e34638c02acf7127cecbfee61fab7d69ad" exitCode=0 Jan 29 11:17:22 crc kubenswrapper[4593]: I0129 11:17:22.852743 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2wbrt" event={"ID":"c39458c0-d624-4ed0-8444-417e479028d2","Type":"ContainerDied","Data":"99ff344d90d5bdd893d1e77e101cd6e34638c02acf7127cecbfee61fab7d69ad"} Jan 29 11:17:24 crc kubenswrapper[4593]: W0129 11:17:24.447758 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7926455_1b18_4907_831f_c8949c999c3e.slice/crio-9698be0e8b0fc8c29042394525e36839b9d1d98f661056973a2df3fda1b5b293 WatchSource:0}: Error finding container 9698be0e8b0fc8c29042394525e36839b9d1d98f661056973a2df3fda1b5b293: Status 404 returned error can't find the container with id 9698be0e8b0fc8c29042394525e36839b9d1d98f661056973a2df3fda1b5b293 Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.589454 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.605998 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle\") pod \"c39458c0-d624-4ed0-8444-417e479028d2\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.606114 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data\") pod \"c39458c0-d624-4ed0-8444-417e479028d2\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.606306 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h678s\" (UniqueName: \"kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s\") pod \"c39458c0-d624-4ed0-8444-417e479028d2\" (UID: \"c39458c0-d624-4ed0-8444-417e479028d2\") " Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.615357 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c39458c0-d624-4ed0-8444-417e479028d2" (UID: "c39458c0-d624-4ed0-8444-417e479028d2"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.634324 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s" (OuterVolumeSpecName: "kube-api-access-h678s") pod "c39458c0-d624-4ed0-8444-417e479028d2" (UID: "c39458c0-d624-4ed0-8444-417e479028d2"). InnerVolumeSpecName "kube-api-access-h678s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.674720 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c39458c0-d624-4ed0-8444-417e479028d2" (UID: "c39458c0-d624-4ed0-8444-417e479028d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.708093 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h678s\" (UniqueName: \"kubernetes.io/projected/c39458c0-d624-4ed0-8444-417e479028d2-kube-api-access-h678s\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.708125 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.708134 4593 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39458c0-d624-4ed0-8444-417e479028d2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.872951 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" event={"ID":"c7926455-1b18-4907-831f-c8949c999c3e","Type":"ContainerStarted","Data":"9698be0e8b0fc8c29042394525e36839b9d1d98f661056973a2df3fda1b5b293"} Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.874198 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2wbrt" event={"ID":"c39458c0-d624-4ed0-8444-417e479028d2","Type":"ContainerDied","Data":"48df691aa2eae747d4bfbb1c9e2a92cb2fce2abef2c0b184a7c467030b299d90"} Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.874233 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48df691aa2eae747d4bfbb1c9e2a92cb2fce2abef2c0b184a7c467030b299d90" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.874292 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2wbrt" Jan 29 11:17:24 crc kubenswrapper[4593]: I0129 11:17:24.912210 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.052341 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.313556 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6cf8bfd486-7dlhx"] Jan 29 11:17:25 crc kubenswrapper[4593]: E0129 11:17:25.325076 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39458c0-d624-4ed0-8444-417e479028d2" containerName="barbican-db-sync" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.325110 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39458c0-d624-4ed0-8444-417e479028d2" containerName="barbican-db-sync" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.325287 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39458c0-d624-4ed0-8444-417e479028d2" containerName="barbican-db-sync" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.326181 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.338112 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.346211 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qf2gb" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.346577 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.346722 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5947965cdc-wl48v"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.348132 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.353096 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.378689 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cf8bfd486-7dlhx"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.409700 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5947965cdc-wl48v"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.422183 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-combined-ca-bundle\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.422522 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwnvt\" (UniqueName: \"kubernetes.io/projected/5f3c398f-928a-4f7e-9e76-6978b8a3673e-kube-api-access-bwnvt\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.422710 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.422958 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.423122 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3c398f-928a-4f7e-9e76-6978b8a3673e-logs\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.423254 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg8v5\" (UniqueName: \"kubernetes.io/projected/564d3b50-7cec-4913-bac8-64af532aa32f-kube-api-access-wg8v5\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.423373 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564d3b50-7cec-4913-bac8-64af532aa32f-logs\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 
11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.423501 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data-custom\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.423625 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-combined-ca-bundle\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.424050 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data-custom\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.520323 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527193 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527268 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3c398f-928a-4f7e-9e76-6978b8a3673e-logs\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527304 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg8v5\" (UniqueName: \"kubernetes.io/projected/564d3b50-7cec-4913-bac8-64af532aa32f-kube-api-access-wg8v5\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527333 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564d3b50-7cec-4913-bac8-64af532aa32f-logs\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527370 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data-custom\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527394 4593 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-combined-ca-bundle\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527454 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data-custom\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527513 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-combined-ca-bundle\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527538 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwnvt\" (UniqueName: \"kubernetes.io/projected/5f3c398f-928a-4f7e-9e76-6978b8a3673e-kube-api-access-bwnvt\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.527578 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.531666 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f3c398f-928a-4f7e-9e76-6978b8a3673e-logs\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.535470 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data-custom\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.535523 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data-custom\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.540045 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-config-data\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " 
pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.542483 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-combined-ca-bundle\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.542857 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/564d3b50-7cec-4913-bac8-64af532aa32f-logs\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.544113 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f3c398f-928a-4f7e-9e76-6978b8a3673e-config-data\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.562978 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564d3b50-7cec-4913-bac8-64af532aa32f-combined-ca-bundle\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.573532 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg8v5\" (UniqueName: \"kubernetes.io/projected/564d3b50-7cec-4913-bac8-64af532aa32f-kube-api-access-wg8v5\") pod \"barbican-worker-5947965cdc-wl48v\" (UID: \"564d3b50-7cec-4913-bac8-64af532aa32f\") " pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.588338 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwnvt\" (UniqueName: \"kubernetes.io/projected/5f3c398f-928a-4f7e-9e76-6978b8a3673e-kube-api-access-bwnvt\") pod \"barbican-keystone-listener-6cf8bfd486-7dlhx\" (UID: \"5f3c398f-928a-4f7e-9e76-6978b8a3673e\") " pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.613712 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.615813 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630611 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630685 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630739 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630789 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5chz2\" (UniqueName: \"kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630845 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.630880 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.684804 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.695015 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.718081 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5947965cdc-wl48v" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.735751 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.735831 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5chz2\" (UniqueName: \"kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.735900 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.735939 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.735999 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.736031 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.737111 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.737711 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.737825 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc 
kubenswrapper[4593]: I0129 11:17:25.737886 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.738246 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.795499 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5chz2\" (UniqueName: \"kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2\") pod \"dnsmasq-dns-7c67bffd47-5pw58\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") " pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.803683 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.805261 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.810717 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.844186 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.844528 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.844617 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.844760 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5mb\" (UniqueName: \"kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.844847 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.849907 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.946582 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.946698 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5mb\" (UniqueName: \"kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.946735 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.946830 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.946885 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.950538 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.959068 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.971473 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5mb\" (UniqueName: \"kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.976819 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.978416 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:25 crc kubenswrapper[4593]: I0129 11:17:25.981736 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data\") pod \"barbican-api-766cf76c8b-cjg59\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:26 crc kubenswrapper[4593]: I0129 11:17:26.151613 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:27 crc kubenswrapper[4593]: E0129 11:17:27.271876 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" Jan 29 11:17:27 crc kubenswrapper[4593]: W0129 11:17:27.306937 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdda5015_0c28_4ab0_befd_715cb8a987e3.slice/crio-e6cec3c9c1b7ab68aa8f259aa6f901629b1b0285ad0c92831ba0cfa791bf0229 WatchSource:0}: Error finding container e6cec3c9c1b7ab68aa8f259aa6f901629b1b0285ad0c92831ba0cfa791bf0229: Status 404 returned error can't find the container with id e6cec3c9c1b7ab68aa8f259aa6f901629b1b0285ad0c92831ba0cfa791bf0229 Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.311377 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.567685 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.640964 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.661435 4593 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cf8bfd486-7dlhx"] Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.881597 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"] Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.972905 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-869645f564-n6fhc" event={"ID":"ae8bb4fd-b1d8-4a6a-ac95-9935c4458747","Type":"ContainerStarted","Data":"7cb00c01315e420b93a8a3b56f18b13dfdf8bf1aee9c02e62e465749e77fa56e"} Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.974058 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:27 crc kubenswrapper[4593]: I0129 11:17:27.974355 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.022905 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-869645f564-n6fhc" podStartSLOduration=12.02287861 podStartE2EDuration="12.02287861s" podCreationTimestamp="2026-01-29 11:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:28.01015918 +0000 UTC m=+1113.883193361" watchObservedRunningTime="2026-01-29 11:17:28.02287861 +0000 UTC m=+1113.895912801" Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.048170 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f789a029-2899-4cb2-8b99-55b77db98b9f","Type":"ContainerStarted","Data":"7b1eb1b6d901dd51bc728c86fd706225cc5cc281da7ab6e6945cc5b869a8a179"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.048798 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.049230 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" containerName="proxy-httpd" containerID="cri-o://7b1eb1b6d901dd51bc728c86fd706225cc5cc281da7ab6e6945cc5b869a8a179" gracePeriod=30 Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.094356 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerStarted","Data":"e6cec3c9c1b7ab68aa8f259aa6f901629b1b0285ad0c92831ba0cfa791bf0229"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.112982 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerStarted","Data":"7d27101e8eb2775200135497bf42bb1e384ed63a353e51a2c079db75d1e60d15"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.155153 4593 generic.go:334] "Generic (PLEG): container finished" podID="9a0467fe-4786-4231-bf52-8a305e9a4f89" containerID="06197cae1e3adecc87ccca3058356e85b083a773c3ebd8eeabc6c5475d59dd8e" exitCode=0 Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.155234 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqbm9" event={"ID":"9a0467fe-4786-4231-bf52-8a305e9a4f89","Type":"ContainerDied","Data":"06197cae1e3adecc87ccca3058356e85b083a773c3ebd8eeabc6c5475d59dd8e"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 
11:17:28.175215 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" event={"ID":"5f3c398f-928a-4f7e-9e76-6978b8a3673e","Type":"ContainerStarted","Data":"cc1522410d38eada260e7227deef9aa8a3ddb52ee0c14975ca76ecce47f73dd2"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.177475 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerStarted","Data":"ab6230e4600dcb9af699c78e8e565ba5926552d85dcff4c655fbdfc2c4ef02b3"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.180278 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5947965cdc-wl48v"] Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.193823 4593 generic.go:334] "Generic (PLEG): container finished" podID="c7926455-1b18-4907-831f-c8949c999c3e" containerID="c61dc38ebb9e5834aa0947deaf7f60860b3b4b6689bf4392d11591aefe6c59f7" exitCode=0 Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.194065 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" event={"ID":"c7926455-1b18-4907-831f-c8949c999c3e","Type":"ContainerDied","Data":"c61dc38ebb9e5834aa0947deaf7f60860b3b4b6689bf4392d11591aefe6c59f7"} Jan 29 11:17:28 crc kubenswrapper[4593]: I0129 11:17:28.198942 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" event={"ID":"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8","Type":"ContainerStarted","Data":"05de26e206fafddf17c6b67f5b66ecbc3caad8b51d7a1c1c245985e3b6e06f37"} Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.249558 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5947965cdc-wl48v" event={"ID":"564d3b50-7cec-4913-bac8-64af532aa32f","Type":"ContainerStarted","Data":"49834deb18122c31f2c3a60696ea136d4cad992a96000561c40ef8b0aa709f3b"} Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.256686 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerStarted","Data":"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53"} Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.256742 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerStarted","Data":"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4"} Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.257384 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.257626 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.285290 4593 generic.go:334] "Generic (PLEG): container finished" podID="f789a029-2899-4cb2-8b99-55b77db98b9f" containerID="7b1eb1b6d901dd51bc728c86fd706225cc5cc281da7ab6e6945cc5b869a8a179" exitCode=0 Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.285401 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f789a029-2899-4cb2-8b99-55b77db98b9f","Type":"ContainerDied","Data":"7b1eb1b6d901dd51bc728c86fd706225cc5cc281da7ab6e6945cc5b869a8a179"} Jan 29 11:17:29 crc kubenswrapper[4593]: 
I0129 11:17:29.311556 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerStarted","Data":"7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90"} Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.361951 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-766cf76c8b-cjg59" podStartSLOduration=4.361922213 podStartE2EDuration="4.361922213s" podCreationTimestamp="2026-01-29 11:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:29.333749051 +0000 UTC m=+1115.206783242" watchObservedRunningTime="2026-01-29 11:17:29.361922213 +0000 UTC m=+1115.234956514" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.403344 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453348 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453450 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrv9n\" (UniqueName: \"kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453511 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453560 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453595 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.453716 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb\") pod \"c7926455-1b18-4907-831f-c8949c999c3e\" (UID: \"c7926455-1b18-4907-831f-c8949c999c3e\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.494359 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n" (OuterVolumeSpecName: "kube-api-access-xrv9n") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). 
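
The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and the observed running time (the pull timestamps are the zero value here because the images were already present). A Go sketch of the same arithmetic on the quoted timestamps; the parse layout is an assumption about the printed format, which looks like Go's default time.String() output:

package main

import (
	"fmt"
	"strings"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

// trimMonotonic drops the trailing monotonic-clock reading ("m=+1113.88...")
// that time.String() appends and the log quotes verbatim.
func trimMonotonic(s string) string {
	if i := strings.Index(s, " m=+"); i >= 0 {
		return s[:i]
	}
	return s
}

func main() {
	created, err := time.Parse(layout, trimMonotonic("2026-01-29 11:17:16 +0000 UTC"))
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse(layout, trimMonotonic("2026-01-29 11:17:28.02287861 +0000 UTC m=+1113.895912801"))
	if err != nil {
		panic(err)
	}
	// Prints 12.02287861s, matching podStartSLOduration=12.02287861 for the
	// placement pod (the SLO figure lines up with watchObservedRunningTime).
	fmt.Println(watched.Sub(created))
}
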
InnerVolumeSpecName "kube-api-access-xrv9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.528744 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.547515 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.551183 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.556078 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.557438 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrv9n\" (UniqueName: \"kubernetes.io/projected/c7926455-1b18-4907-831f-c8949c999c3e-kube-api-access-xrv9n\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.557464 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.557473 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.557483 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.615782 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.638071 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config" (OuterVolumeSpecName: "config") pod "c7926455-1b18-4907-831f-c8949c999c3e" (UID: "c7926455-1b18-4907-831f-c8949c999c3e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659394 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfxh7\" (UniqueName: \"kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659517 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659544 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659598 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659616 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659699 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.659726 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd\") pod \"f789a029-2899-4cb2-8b99-55b77db98b9f\" (UID: \"f789a029-2899-4cb2-8b99-55b77db98b9f\") " Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.660585 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.660611 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7926455-1b18-4907-831f-c8949c999c3e-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.661120 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.716480 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.720340 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.723382 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7" (OuterVolumeSpecName: "kube-api-access-mfxh7") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "kube-api-access-mfxh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.730180 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts" (OuterVolumeSpecName: "scripts") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.765890 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfxh7\" (UniqueName: \"kubernetes.io/projected/f789a029-2899-4cb2-8b99-55b77db98b9f-kube-api-access-mfxh7\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.765927 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.765937 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.765946 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.765954 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f789a029-2899-4cb2-8b99-55b77db98b9f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.772666 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data" (OuterVolumeSpecName: "config-data") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.784113 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f789a029-2899-4cb2-8b99-55b77db98b9f" (UID: "f789a029-2899-4cb2-8b99-55b77db98b9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.873907 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:29 crc kubenswrapper[4593]: I0129 11:17:29.874275 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f789a029-2899-4cb2-8b99-55b77db98b9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.046552 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59844fc4b6-zctck"] Jan 29 11:17:30 crc kubenswrapper[4593]: E0129 11:17:30.047327 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" containerName="proxy-httpd" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.047343 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" containerName="proxy-httpd" Jan 29 11:17:30 crc kubenswrapper[4593]: E0129 11:17:30.047375 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7926455-1b18-4907-831f-c8949c999c3e" containerName="init" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.047383 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7926455-1b18-4907-831f-c8949c999c3e" containerName="init" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.047607 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7926455-1b18-4907-831f-c8949c999c3e" containerName="init" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.047669 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" containerName="proxy-httpd" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.051383 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.062144 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.062392 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.068490 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59844fc4b6-zctck"] Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.198748 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgc9p\" (UniqueName: \"kubernetes.io/projected/07d138d8-a5fa-4b77-80e5-924dba8de4c0-kube-api-access-qgc9p\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.198979 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-public-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.202213 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d138d8-a5fa-4b77-80e5-924dba8de4c0-logs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.202268 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.202415 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data-custom\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.202500 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-internal-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.202540 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-combined-ca-bundle\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304685 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-public-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304764 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d138d8-a5fa-4b77-80e5-924dba8de4c0-logs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304787 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304841 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data-custom\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304877 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-internal-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304900 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-combined-ca-bundle\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.304927 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgc9p\" (UniqueName: \"kubernetes.io/projected/07d138d8-a5fa-4b77-80e5-924dba8de4c0-kube-api-access-qgc9p\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.307143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d138d8-a5fa-4b77-80e5-924dba8de4c0-logs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.312448 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-public-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.313470 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-combined-ca-bundle\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.314986 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-internal-tls-certs\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.332456 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data-custom\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.350894 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07d138d8-a5fa-4b77-80e5-924dba8de4c0-config-data\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.351609 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgc9p\" (UniqueName: \"kubernetes.io/projected/07d138d8-a5fa-4b77-80e5-924dba8de4c0-kube-api-access-qgc9p\") pod \"barbican-api-59844fc4b6-zctck\" (UID: \"07d138d8-a5fa-4b77-80e5-924dba8de4c0\") " pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.374788 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" event={"ID":"c7926455-1b18-4907-831f-c8949c999c3e","Type":"ContainerDied","Data":"9698be0e8b0fc8c29042394525e36839b9d1d98f661056973a2df3fda1b5b293"} Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.374871 4593 scope.go:117] "RemoveContainer" containerID="c61dc38ebb9e5834aa0947deaf7f60860b3b4b6689bf4392d11591aefe6c59f7" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.375083 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbcrq" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.415139 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerStarted","Data":"0140ab8d3e8cf8b2bdec9ddf8c25ab28c14ce7ffb775e171fd3a491b545310cb"} Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.443990 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qqbm9" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.443995 4593 generic.go:334] "Generic (PLEG): container finished" podID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerID="715e647703b26a590bd9c34541d425220134bcfb800847b738a35414acceb9c1" exitCode=0 Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.444091 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" event={"ID":"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8","Type":"ContainerDied","Data":"715e647703b26a590bd9c34541d425220134bcfb800847b738a35414acceb9c1"} Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.445586 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.453089 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.458553 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f789a029-2899-4cb2-8b99-55b77db98b9f","Type":"ContainerDied","Data":"81e674e8a5ccd570da2b45a02c26820c6aece1f8b0def79a73d4b051b04177a1"} Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.458781 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.516862 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.517178 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.517247 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.517272 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.517293 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.517408 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb8cj\" (UniqueName: \"kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj\") pod \"9a0467fe-4786-4231-bf52-8a305e9a4f89\" (UID: \"9a0467fe-4786-4231-bf52-8a305e9a4f89\") " Jan 
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.539407 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts" (OuterVolumeSpecName: "scripts") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.540519 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.552779 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj" (OuterVolumeSpecName: "kube-api-access-hb8cj") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "kube-api-access-hb8cj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.553756 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.580318 4593 scope.go:117] "RemoveContainer" containerID="7b1eb1b6d901dd51bc728c86fd706225cc5cc281da7ab6e6945cc5b869a8a179"
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.622104 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.622150 4593 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9a0467fe-4786-4231-bf52-8a305e9a4f89-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.622164 4593 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.622175 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb8cj\" (UniqueName: \"kubernetes.io/projected/9a0467fe-4786-4231-bf52-8a305e9a4f89-kube-api-access-hb8cj\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.627011 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"]
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.647522 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.660056 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbcrq"]
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.725902 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.787941 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data" (OuterVolumeSpecName: "config-data") pod "9a0467fe-4786-4231-bf52-8a305e9a4f89" (UID: "9a0467fe-4786-4231-bf52-8a305e9a4f89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.832007 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a0467fe-4786-4231-bf52-8a305e9a4f89-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.916268 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.916334 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.927987 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:17:30 crc kubenswrapper[4593]: E0129 11:17:30.928392 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" containerName="cinder-db-sync"
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.928413 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" containerName="cinder-db-sync"
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.928582 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" containerName="cinder-db-sync"
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.930481 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.937664 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.937677 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 11:17:30 crc kubenswrapper[4593]: I0129 11:17:30.951401 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038363 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038654 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038693 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bdq8\" (UniqueName: \"kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038735 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038786 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038825 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.038872 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.089088 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7926455-1b18-4907-831f-c8949c999c3e" path="/var/lib/kubelet/pods/c7926455-1b18-4907-831f-c8949c999c3e/volumes"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.089912 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f789a029-2899-4cb2-8b99-55b77db98b9f" path="/var/lib/kubelet/pods/f789a029-2899-4cb2-8b99-55b77db98b9f/volumes"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.140805 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.140860 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bdq8\" (UniqueName: \"kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.140907 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.140969 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.141008 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.141047 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.141071 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.179422 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.181897 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.183740 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.185328 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.185385 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.186042 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.209583 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bdq8\" (UniqueName: \"kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8\") pod \"ceilometer-0\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.277318 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.331309 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:17:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:17:31 crc kubenswrapper[4593]: >
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.331389 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l8n"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.332103 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9"} pod="openshift-marketplace/redhat-operators-k4l8n" containerMessage="Container registry-server failed startup probe, will be restarted"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.332132 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" containerID="cri-o://392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9" gracePeriod=30
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.482657 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qqbm9" event={"ID":"9a0467fe-4786-4231-bf52-8a305e9a4f89","Type":"ContainerDied","Data":"4a77796204d00631fc171e9b5f3f1adaf76dc3ea5c4251742c0c78ae086cb84b"}
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.482704 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a77796204d00631fc171e9b5f3f1adaf76dc3ea5c4251742c0c78ae086cb84b"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.482780 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qqbm9"
Jan 29 11:17:31 crc kubenswrapper[4593]: I0129 11:17:31.703285 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59844fc4b6-zctck"]
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.156711 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.174293 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.191206 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.201216 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-jhpvr"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.237018 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.237283 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.237461 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.244157 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.298898 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.298959 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smgc5\" (UniqueName: \"kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.299044 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.299072 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.299163 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.299199 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.326406 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"]
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.372644 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"]
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.374138 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405055 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405104 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smgc5\" (UniqueName: \"kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405160 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405179 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405231 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.405251 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.408500 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.428701 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.433794 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.436522 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.444161 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smgc5\" (UniqueName: \"kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.445031 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.502166 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59844fc4b6-zctck" event={"ID":"07d138d8-a5fa-4b77-80e5-924dba8de4c0","Type":"ContainerStarted","Data":"cc2bf57001fb03a85840206e84847299fc4e42a35c3541ce09565299fe34a0a7"}
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.503486 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerStarted","Data":"0eb50a3ac1f633cc99edb2df912ed9ee0643f4c8b02ce477d7d327cbda5af774"}
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.506963 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwb96\" (UniqueName: \"kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507044 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507070 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507123 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507140 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507157 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.507493 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"]
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.564152 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609342 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwb96\" (UniqueName: \"kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609680 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609711 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609758 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609781 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.609805 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.610831 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.611949 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.612579 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.613241 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.614602 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.644364 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwb96\" (UniqueName: \"kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96\") pod \"dnsmasq-dns-5cc8b5d5c5-2q2qb\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.697360 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.925897 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.927472 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:17:32 crc kubenswrapper[4593]: I0129 11:17:32.948921 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.002805 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.023914 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024025 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024132 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024231 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024272 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024423 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.024463 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg67x\" (UniqueName: \"kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.130179 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.130883 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.131131 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.131401 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.131523 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg67x\" (UniqueName: \"kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.131783 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.133976 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.130993 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.134527 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.142437 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.144369 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.145118 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.148253 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.154204 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg67x\" (UniqueName: \"kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x\") pod \"cinder-api-0\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.309031 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.501513 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:17:33 crc kubenswrapper[4593]: W0129 11:17:33.509429 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10756552_28da_4e84_9c43_fb2be288e81f.slice/crio-966232a0b0262262a982b33e0fb01619e0942fc49fb0be06397f90be642babf0 WatchSource:0}: Error finding container 966232a0b0262262a982b33e0fb01619e0942fc49fb0be06397f90be642babf0: Status 404 returned error can't find the container with id 966232a0b0262262a982b33e0fb01619e0942fc49fb0be06397f90be642babf0 Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.519104 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59844fc4b6-zctck" event={"ID":"07d138d8-a5fa-4b77-80e5-924dba8de4c0","Type":"ContainerStarted","Data":"1dcb0d3ad44597fda668b536d2258c06dfb2f2f9f795f671928b7f0edbcbbc80"} Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.535603 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerStarted","Data":"98253b10b8a6ff59d034c63fee78761a0019ffa08c9b2a1a3f935e859663925d"} Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.535870 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-log" containerID="cri-o://0140ab8d3e8cf8b2bdec9ddf8c25ab28c14ce7ffb775e171fd3a491b545310cb" gracePeriod=30 Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.537235 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-httpd" containerID="cri-o://98253b10b8a6ff59d034c63fee78761a0019ffa08c9b2a1a3f935e859663925d" gracePeriod=30 Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.591765 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.591744839 podStartE2EDuration="16.591744839s" podCreationTimestamp="2026-01-29 11:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 
11:17:33.57305608 +0000 UTC m=+1119.446090271" watchObservedRunningTime="2026-01-29 11:17:33.591744839 +0000 UTC m=+1119.464779030" Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.715194 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"] Jan 29 11:17:33 crc kubenswrapper[4593]: I0129 11:17:33.986144 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.596600 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerStarted","Data":"5819a6ffae38a266d2b0e8c7f0f4a9a9ec8806aff42d69e8d72319628c862e12"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.617980 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerStarted","Data":"6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.618040 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-log" containerID="cri-o://7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90" gracePeriod=30 Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.618135 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-httpd" containerID="cri-o://6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856" gracePeriod=30 Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.641951 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59844fc4b6-zctck" event={"ID":"07d138d8-a5fa-4b77-80e5-924dba8de4c0","Type":"ContainerStarted","Data":"d88270e238fb0280c9be483c689ea2ab0ed9693bd426148cd79f03f059fc5e20"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.642297 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.642464 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.672915 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=17.672893281 podStartE2EDuration="17.672893281s" podCreationTimestamp="2026-01-29 11:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:34.656181735 +0000 UTC m=+1120.529215926" watchObservedRunningTime="2026-01-29 11:17:34.672893281 +0000 UTC m=+1120.545927472" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.705199 4593 generic.go:334] "Generic (PLEG): container finished" podID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerID="98253b10b8a6ff59d034c63fee78761a0019ffa08c9b2a1a3f935e859663925d" exitCode=143 Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.705468 4593 generic.go:334] "Generic (PLEG): container finished" podID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerID="0140ab8d3e8cf8b2bdec9ddf8c25ab28c14ce7ffb775e171fd3a491b545310cb" exitCode=143 Jan 29 
11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.705279 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerDied","Data":"98253b10b8a6ff59d034c63fee78761a0019ffa08c9b2a1a3f935e859663925d"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.705608 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerDied","Data":"0140ab8d3e8cf8b2bdec9ddf8c25ab28c14ce7ffb775e171fd3a491b545310cb"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.738276 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59844fc4b6-zctck" podStartSLOduration=4.7382517140000004 podStartE2EDuration="4.738251714s" podCreationTimestamp="2026-01-29 11:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:34.692654438 +0000 UTC m=+1120.565688629" watchObservedRunningTime="2026-01-29 11:17:34.738251714 +0000 UTC m=+1120.611285905" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.769415 4593 generic.go:334] "Generic (PLEG): container finished" podID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerID="b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5" exitCode=0 Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.769503 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" event={"ID":"cad93c02-cde3-4a50-9f89-1800d0436d2d","Type":"ContainerDied","Data":"b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.769529 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" event={"ID":"cad93c02-cde3-4a50-9f89-1800d0436d2d","Type":"ContainerStarted","Data":"564ff28580e51f15a586a4b36ebebac1a1de37d8a71b76aea863a2b018150e6b"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.847826 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" event={"ID":"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8","Type":"ContainerStarted","Data":"acc2ab1ca6452852fce166472b7f5c7988a09acccb46e0bcd818f3a7b6b7f432"} Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.847840 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="dnsmasq-dns" containerID="cri-o://acc2ab1ca6452852fce166472b7f5c7988a09acccb46e0bcd818f3a7b6b7f432" gracePeriod=10 Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.848166 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.859344 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerStarted","Data":"966232a0b0262262a982b33e0fb01619e0942fc49fb0be06397f90be642babf0"} Jan 29 11:17:34 crc kubenswrapper[4593]: E0129 11:17:34.902975 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcad93c02_cde3_4a50_9f89_1800d0436d2d.slice/crio-conmon-b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdda5015_0c28_4ab0_befd_715cb8a987e3.slice/crio-6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcad93c02_cde3_4a50_9f89_1800d0436d2d.slice/crio-b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdda5015_0c28_4ab0_befd_715cb8a987e3.slice/crio-conmon-6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdda5015_0c28_4ab0_befd_715cb8a987e3.slice/crio-conmon-7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:17:34 crc kubenswrapper[4593]: I0129 11:17:34.914671 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.049408 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.122698 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" podStartSLOduration=10.122679453 podStartE2EDuration="10.122679453s" podCreationTimestamp="2026-01-29 11:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:34.895923782 +0000 UTC m=+1120.768957973" watchObservedRunningTime="2026-01-29 11:17:35.122679453 +0000 UTC m=+1120.995713644" Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.179776 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.870132 4593 generic.go:334] "Generic (PLEG): container finished" podID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerID="6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856" exitCode=143 Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.870445 4593 generic.go:334] "Generic (PLEG): container finished" podID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerID="7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90" exitCode=143 Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.870346 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerDied","Data":"6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856"} 
Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.870527 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerDied","Data":"7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90"}
Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.874542 4593 generic.go:334] "Generic (PLEG): container finished" podID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerID="acc2ab1ca6452852fce166472b7f5c7988a09acccb46e0bcd818f3a7b6b7f432" exitCode=0
Jan 29 11:17:35 crc kubenswrapper[4593]: I0129 11:17:35.874625 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" event={"ID":"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8","Type":"ContainerDied","Data":"acc2ab1ca6452852fce166472b7f5c7988a09acccb46e0bcd818f3a7b6b7f432"}
Jan 29 11:17:36 crc kubenswrapper[4593]: I0129 11:17:36.888889 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerStarted","Data":"6ca508da8e21ef8dd7d2c43f12f73a45b855f01c94f63172557349f3344fc6c9"}
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.627999 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58"
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789194 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789352 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789430 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789459 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789533 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5chz2\" (UniqueName: \"kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.789716 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb\") pod \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\" (UID: \"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.816129 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2" (OuterVolumeSpecName: "kube-api-access-5chz2") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "kube-api-access-5chz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.883369 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.888725 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.891887 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.892341 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.892384 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5chz2\" (UniqueName: \"kubernetes.io/projected/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-kube-api-access-5chz2\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.892396 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.940109 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"50c0ed30-282a-446b-b0cc-f201e07cd2b5","Type":"ContainerDied","Data":"7d27101e8eb2775200135497bf42bb1e384ed63a353e51a2c079db75d1e60d15"}
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.940156 4593 scope.go:117] "RemoveContainer" containerID="98253b10b8a6ff59d034c63fee78761a0019ffa08c9b2a1a3f935e859663925d"
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.940279 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.962725 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.963855 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.987119 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config" (OuterVolumeSpecName: "config") pod "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" (UID: "037d1b1d-fa9c-4a8f-8403-46de0acfa1d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997309 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvcks\" (UniqueName: \"kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997386 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997463 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997490 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997591 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997616 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.997658 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data\") pod \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\" (UID: \"50c0ed30-282a-446b-b0cc-f201e07cd2b5\") "
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.998134 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.998147 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:37 crc kubenswrapper[4593]: I0129 11:17:37.998156 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:37.999460 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs" (OuterVolumeSpecName: "logs") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:37.999781 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58" event={"ID":"037d1b1d-fa9c-4a8f-8403-46de0acfa1d8","Type":"ContainerDied","Data":"05de26e206fafddf17c6b67f5b66ecbc3caad8b51d7a1c1c245985e3b6e06f37"}
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:37.999867 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-5pw58"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:37.999999 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.011997 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks" (OuterVolumeSpecName: "kube-api-access-nvcks") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "kube-api-access-nvcks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.014543 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.030825 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts" (OuterVolumeSpecName: "scripts") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.059199 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"]
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.066545 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-5pw58"]
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.103316 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.103364 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvcks\" (UniqueName: \"kubernetes.io/projected/50c0ed30-282a-446b-b0cc-f201e07cd2b5-kube-api-access-nvcks\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.103375 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-logs\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.103407 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.103419 4593 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/50c0ed30-282a-446b-b0cc-f201e07cd2b5-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.123631 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-869645f564-n6fhc"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.182074 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.206039 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.253737 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.259743 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data" (OuterVolumeSpecName: "config-data") pod "50c0ed30-282a-446b-b0cc-f201e07cd2b5" (UID: "50c0ed30-282a-446b-b0cc-f201e07cd2b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.308960 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.309021 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c0ed30-282a-446b-b0cc-f201e07cd2b5-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.455285 4593 scope.go:117] "RemoveContainer" containerID="0140ab8d3e8cf8b2bdec9ddf8c25ab28c14ce7ffb775e171fd3a491b545310cb"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.901384 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.909221 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.924398 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.936845 4593 scope.go:117] "RemoveContainer" containerID="acc2ab1ca6452852fce166472b7f5c7988a09acccb46e0bcd818f3a7b6b7f432"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982272 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982671 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-log"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982685 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-log"
Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982701 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-log"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982707 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-log"
Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982720 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-httpd"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982725 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-httpd"
Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982737 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="dnsmasq-dns"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982743 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="dnsmasq-dns"
Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982750 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="init"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982755 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="init"
Jan 29 11:17:38 crc kubenswrapper[4593]: E0129 11:17:38.982762 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-httpd"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982767 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-httpd"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982937 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" containerName="dnsmasq-dns"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982962 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-log"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.982984 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-log"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.983000 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" containerName="glance-httpd"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.983009 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" containerName="glance-httpd"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.983935 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.988092 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 29 11:17:38 crc kubenswrapper[4593]: I0129 11:17:38.988425 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.027335 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.062888 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") "
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.062988 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") "
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063041 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") "
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063079 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") "
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063096 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") "
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063147 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpxq8\" (UniqueName: \"kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") "
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063227 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs\") pod \"fdda5015-0c28-4ab0-befd-715cb8a987e3\" (UID: \"fdda5015-0c28-4ab0-befd-715cb8a987e3\") "
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.063960 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.064667 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs" (OuterVolumeSpecName: "logs") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.104938 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172057 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172121 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172154 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172235 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172254 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5xlg\" (UniqueName: \"kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172323 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172502 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.172871 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.173148 4593 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.173328 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdda5015-0c28-4ab0-befd-715cb8a987e3-logs\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.215135 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts" (OuterVolumeSpecName: "scripts") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.215272 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.226916 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037d1b1d-fa9c-4a8f-8403-46de0acfa1d8" path="/var/lib/kubelet/pods/037d1b1d-fa9c-4a8f-8403-46de0acfa1d8/volumes"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.230631 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c0ed30-282a-446b-b0cc-f201e07cd2b5" path="/var/lib/kubelet/pods/50c0ed30-282a-446b-b0cc-f201e07cd2b5/volumes"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.238046 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fdda5015-0c28-4ab0-befd-715cb8a987e3","Type":"ContainerDied","Data":"e6cec3c9c1b7ab68aa8f259aa6f901629b1b0285ad0c92831ba0cfa791bf0229"}
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.255050 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8" (OuterVolumeSpecName: "kube-api-access-dpxq8") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "kube-api-access-dpxq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280514 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280584 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280604 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280621 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280674 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280688 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5xlg\" (UniqueName: \"kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280722 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280763 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280849 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpxq8\" (UniqueName: \"kubernetes.io/projected/fdda5015-0c28-4ab0-befd-715cb8a987e3-kube-api-access-dpxq8\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280872 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.280881 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.285053 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.290456 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.290852 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.381494 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.384063 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.384259 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.386243 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5xlg\" (UniqueName: \"kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.386752 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.412449 4593 scope.go:117] "RemoveContainer" containerID="715e647703b26a590bd9c34541d425220134bcfb800847b738a35414acceb9c1"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.470851 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.632816 4593 scope.go:117] "RemoveContainer" containerID="6349a6f4b6a687bdb26600a20fac4d160a672a92f042b686a1b78088c5890856"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.636623 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.665861 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.692360 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.931692 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:39 crc kubenswrapper[4593]: I0129 11:17:39.982856 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data" (OuterVolumeSpecName: "config-data") pod "fdda5015-0c28-4ab0-befd-715cb8a987e3" (UID: "fdda5015-0c28-4ab0-befd-715cb8a987e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.002009 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.002047 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdda5015-0c28-4ab0-befd-715cb8a987e3-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.247205 4593 scope.go:117] "RemoveContainer" containerID="7c8b245da461c9d7cfe5494a143681450b958c3b59e7d2c1a13483a01ca4bb90"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.247272 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.247315 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.278926 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" event={"ID":"5f3c398f-928a-4f7e-9e76-6978b8a3673e","Type":"ContainerStarted","Data":"b01e456a31a7e0718ddba3b0cda0b5959a52ff29b15286c62a6291d2d96dae2b"}
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.289191 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.314692 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" event={"ID":"cad93c02-cde3-4a50-9f89-1800d0436d2d","Type":"ContainerStarted","Data":"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73"}
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.316005 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.330837 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5947965cdc-wl48v" event={"ID":"564d3b50-7cec-4913-bac8-64af532aa32f","Type":"ContainerStarted","Data":"cfcd2c8094e422e569c7ba510cc7201f5fa7af1a26ec251ecbe01c2340b45374"}
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.352876 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.381717 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.383586 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.393389 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.393597 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.428308 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.429370 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" podStartSLOduration=8.429345486 podStartE2EDuration="8.429345486s" podCreationTimestamp="2026-01-29 11:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:40.35794614 +0000 UTC m=+1126.230980351" watchObservedRunningTime="2026-01-29 11:17:40.429345486 +0000 UTC m=+1126.302379677"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.541773 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9nh2\" (UniqueName: \"kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.541819 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.541861 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.541902 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.541940 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.542012 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.542054 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.542087 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.643585 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.643768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646325 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646445 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646493 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646534 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646572 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9nh2\" (UniqueName: \"kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.646609 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.647313 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.648785 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.652434 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.671719 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.678413 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9nh2\" (UniqueName: \"kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.684335 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.693090 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.713101 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.747836 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.774852 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 29 11:17:40 crc kubenswrapper[4593]: I0129 11:17:40.819096 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 29 11:17:40 crc kubenswrapper[4593]: W0129 11:17:40.913225 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7289daaa_acda_4854_a506_c6cc429562d3.slice/crio-db9797e87c1781dc943e7d1006dfa6fe3eaaf5edc0bffd04dc66ed3f512449a4 WatchSource:0}: Error finding container db9797e87c1781dc943e7d1006dfa6fe3eaaf5edc0bffd04dc66ed3f512449a4: Status 404 returned error can't find the container with id db9797e87c1781dc943e7d1006dfa6fe3eaaf5edc0bffd04dc66ed3f512449a4
Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.155766 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.179509 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdda5015-0c28-4ab0-befd-715cb8a987e3" path="/var/lib/kubelet/pods/fdda5015-0c28-4ab0-befd-715cb8a987e3/volumes"
Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.200219 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.376310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerStarted","Data":"532ef2b08300e953556c4f80a0efbeeef65f13a2c78db2506158a85df92e08ac"}
Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.376525 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api-log" containerID="cri-o://6ca508da8e21ef8dd7d2c43f12f73a45b855f01c94f63172557349f3344fc6c9" gracePeriod=30
Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.376871 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.376894 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" containerID="cri-o://532ef2b08300e953556c4f80a0efbeeef65f13a2c78db2506158a85df92e08ac" gracePeriod=30
Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.383807 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5947965cdc-wl48v"
event={"ID":"564d3b50-7cec-4913-bac8-64af532aa32f","Type":"ContainerStarted","Data":"ab5ff234cb486571f2ea563777120d15ec2665801fe297fa0f02a5645faa2e70"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.389139 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerStarted","Data":"49c7f116f6b968b8e92002d04be3944f190deaba5cfb0c87a84ff79e7f77d0cb"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.391399 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerStarted","Data":"db9797e87c1781dc943e7d1006dfa6fe3eaaf5edc0bffd04dc66ed3f512449a4"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.393754 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerStarted","Data":"a9f1fe703de62c9906cf5414628cb1871967b692dd15c7ec296d4900c7151a67"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.408780 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.408757732 podStartE2EDuration="9.408757732s" podCreationTimestamp="2026-01-29 11:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:41.399372132 +0000 UTC m=+1127.272406323" watchObservedRunningTime="2026-01-29 11:17:41.408757732 +0000 UTC m=+1127.281791933" Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.429495 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" event={"ID":"5f3c398f-928a-4f7e-9e76-6978b8a3673e","Type":"ContainerStarted","Data":"9f8ba3debfac9d511eedbf82e0f3be84890aaa0c424afc934876a51b18b17b56"} Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.699150 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5947965cdc-wl48v" podStartSLOduration=7.022099483 podStartE2EDuration="16.699127801s" podCreationTimestamp="2026-01-29 11:17:25 +0000 UTC" firstStartedPulling="2026-01-29 11:17:28.243471287 +0000 UTC m=+1114.116505478" lastFinishedPulling="2026-01-29 11:17:37.920499605 +0000 UTC m=+1123.793533796" observedRunningTime="2026-01-29 11:17:41.515100719 +0000 UTC m=+1127.388134910" watchObservedRunningTime="2026-01-29 11:17:41.699127801 +0000 UTC m=+1127.572162002" Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:41.732736 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6cf8bfd486-7dlhx" podStartSLOduration=6.5600329219999995 podStartE2EDuration="16.732716167s" podCreationTimestamp="2026-01-29 11:17:25 +0000 UTC" firstStartedPulling="2026-01-29 11:17:27.746650429 +0000 UTC m=+1113.619684630" lastFinishedPulling="2026-01-29 11:17:37.919333684 +0000 UTC m=+1123.792367875" observedRunningTime="2026-01-29 11:17:41.567244392 +0000 UTC m=+1127.440278583" watchObservedRunningTime="2026-01-29 11:17:41.732716167 +0000 UTC m=+1127.605750358" Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:42.512916 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerStarted","Data":"aba14bdcb819b3097f623b10d1f889520b4a3ec8b94a23129679074b0158bb26"} Jan 29 11:17:42 crc 
kubenswrapper[4593]: I0129 11:17:42.555271 4593 generic.go:334] "Generic (PLEG): container finished" podID="95847704-1027-4518-9f5c-cd663496b804" containerID="6ca508da8e21ef8dd7d2c43f12f73a45b855f01c94f63172557349f3344fc6c9" exitCode=143 Jan 29 11:17:42 crc kubenswrapper[4593]: I0129 11:17:42.555359 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerDied","Data":"6ca508da8e21ef8dd7d2c43f12f73a45b855f01c94f63172557349f3344fc6c9"} Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.248740 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.314497 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-869645f564-n6fhc" Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.417169 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.417413 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-669db997bd-hhbcc" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-log" containerID="cri-o://b2737a73be5d76fb8f211f8bf7e6f7f5d5df136a1e001d613ced73be513cce7c" gracePeriod=30 Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.418104 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-669db997bd-hhbcc" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-api" containerID="cri-o://fca879370bdf54a12b3a105098148973a13eddb0bbbb835f4a9653bb9e65ca80" gracePeriod=30 Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.640459 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerStarted","Data":"4bb371c1c9d2fcc4f80bfb03ebb66d3dd6167a7190179617153d4df635eb3592"} Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.715683 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerStarted","Data":"24897273abec623fff6c526f0b856b7cfaaa9ed18d3e576b618b0daab55ab047"} Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.756526 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerStarted","Data":"90fb85235bc3606a7b4bb84b4b179cef3fafc0ce2eb0f3b29c3cc2eb08fb78b3"} Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.762611 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.546937843 podStartE2EDuration="11.762586756s" podCreationTimestamp="2026-01-29 11:17:32 +0000 UTC" firstStartedPulling="2026-01-29 11:17:33.514048036 +0000 UTC m=+1119.387082227" lastFinishedPulling="2026-01-29 11:17:38.729696949 +0000 UTC m=+1124.602731140" observedRunningTime="2026-01-29 11:17:43.743568978 +0000 UTC m=+1129.616603169" watchObservedRunningTime="2026-01-29 11:17:43.762586756 +0000 UTC m=+1129.635620947" Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.787573 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerStarted","Data":"b0abb69f5e56bccd2bb62baeb61fd064ee7010eb36ba3b37edb2c69864a733d7"} Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.800258 4593 generic.go:334] "Generic (PLEG): container finished" podID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerID="b2737a73be5d76fb8f211f8bf7e6f7f5d5df136a1e001d613ced73be513cce7c" exitCode=143 Jan 29 11:17:43 crc kubenswrapper[4593]: I0129 11:17:43.800303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerDied","Data":"b2737a73be5d76fb8f211f8bf7e6f7f5d5df136a1e001d613ced73be513cce7c"} Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.195296 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-669db997bd-hhbcc" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.149:8778/\": read tcp 10.217.0.2:54274->10.217.0.149:8778: read: connection reset by peer" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.195301 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-669db997bd-hhbcc" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.149:8778/\": read tcp 10.217.0.2:54282->10.217.0.149:8778: read: connection reset by peer" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.500934 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.500935 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.818445 4593 generic.go:334] "Generic (PLEG): container finished" podID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerID="fca879370bdf54a12b3a105098148973a13eddb0bbbb835f4a9653bb9e65ca80" exitCode=0 Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.818759 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerDied","Data":"fca879370bdf54a12b3a105098148973a13eddb0bbbb835f4a9653bb9e65ca80"} Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.818787 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-669db997bd-hhbcc" event={"ID":"dcf8c6b2-659d-4fbb-82ef-d9749443f647","Type":"ContainerDied","Data":"32fdfc7881c963abaad68073c4d49c25e3c8cc05f9fcc814488ad8238d96326b"} Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.818797 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32fdfc7881c963abaad68073c4d49c25e3c8cc05f9fcc814488ad8238d96326b" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.823814 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerStarted","Data":"d9d7dd8976380d6486fd1b5f21789a9b38a5817e8ac2103c8d17ab8df8f5fe64"} Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.828299 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerStarted","Data":"3293c2e1edd54e8ff7f4dc2cefd7cf058a429e32cd917bd68da12dc400ead3f5"} Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.850429 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.850405606 podStartE2EDuration="6.850405606s" podCreationTimestamp="2026-01-29 11:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:44.846991884 +0000 UTC m=+1130.720026075" watchObservedRunningTime="2026-01-29 11:17:44.850405606 +0000 UTC m=+1130.723439797" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.909836 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.909927 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.910858 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996"} pod="openstack/horizon-fbf566cdb-kbm9z" containerMessage="Container horizon failed startup probe, will be restarted" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.910904 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" containerID="cri-o://a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996" gracePeriod=30 Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.938026 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.968532 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.980214 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.980476 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.980688 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb55m\" (UniqueName: \"kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.981069 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.981147 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.981209 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts\") pod \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\" (UID: \"dcf8c6b2-659d-4fbb-82ef-d9749443f647\") " Jan 29 11:17:44 crc kubenswrapper[4593]: I0129 11:17:44.984556 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs" (OuterVolumeSpecName: "logs") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.001384 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m" (OuterVolumeSpecName: "kube-api-access-hb55m") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "kube-api-access-hb55m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.020134 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts" (OuterVolumeSpecName: "scripts") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.100252 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcf8c6b2-659d-4fbb-82ef-d9749443f647-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.100301 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb55m\" (UniqueName: \"kubernetes.io/projected/dcf8c6b2-659d-4fbb-82ef-d9749443f647-kube-api-access-hb55m\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.100320 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.100426 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.248052 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.248777 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.271407 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.272452 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1"} pod="openstack/horizon-5bdffb4784-5zp8q" containerMessage="Container horizon failed startup probe, will be restarted" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.272495 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" containerID="cri-o://948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1" gracePeriod=30 Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.310445 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.321119 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.348767 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data" (OuterVolumeSpecName: "config-data") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.389602 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.428084 4593 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.428119 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.463205 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.463335 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.519575 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dcf8c6b2-659d-4fbb-82ef-d9749443f647" (UID: "dcf8c6b2-659d-4fbb-82ef-d9749443f647"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.529538 4593 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcf8c6b2-659d-4fbb-82ef-d9749443f647-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.878315 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-669db997bd-hhbcc" Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.922433 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:45 crc kubenswrapper[4593]: I0129 11:17:45.937097 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-669db997bd-hhbcc"] Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.198904 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.241881 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.438856 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.524230 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.907184 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerStarted","Data":"964d34df183e187ec805f4ff554355a6b6ef2fc5d1f44b5ea4d74d26a5c58cdc"} Jan 29 11:17:46 crc kubenswrapper[4593]: I0129 11:17:46.936182 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.936159215 podStartE2EDuration="6.936159215s" podCreationTimestamp="2026-01-29 11:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:17:46.926195279 +0000 UTC m=+1132.799229480" watchObservedRunningTime="2026-01-29 11:17:46.936159215 +0000 UTC m=+1132.809193406" Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.089982 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" path="/var/lib/kubelet/pods/dcf8c6b2-659d-4fbb-82ef-d9749443f647/volumes" Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.564966 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.567610 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" probeResult="failure" 
output="Get \"http://10.217.0.161:8080/\": dial tcp 10.217.0.161:8080: connect: connection refused" Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.698882 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.817580 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:17:47 crc kubenswrapper[4593]: I0129 11:17:47.818429 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="dnsmasq-dns" containerID="cri-o://d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3" gracePeriod=10 Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.000726 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerStarted","Data":"88b868d7da96b6b3e10186188d5bbc939be24d322cd5116219ae0adb17dbd928"} Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.001084 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.066939 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.534677876 podStartE2EDuration="18.06691068s" podCreationTimestamp="2026-01-29 11:17:30 +0000 UTC" firstStartedPulling="2026-01-29 11:17:32.152156942 +0000 UTC m=+1118.025191143" lastFinishedPulling="2026-01-29 11:17:46.684389756 +0000 UTC m=+1132.557423947" observedRunningTime="2026-01-29 11:17:48.055934237 +0000 UTC m=+1133.928968428" watchObservedRunningTime="2026-01-29 11:17:48.06691068 +0000 UTC m=+1133.939944871" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.723457 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.798884 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66q5l\" (UniqueName: \"kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.798975 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.799062 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.799093 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.799189 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.799223 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config\") pod \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\" (UID: \"8fb458d5-4cf6-41ed-bf24-cc63387a17f8\") " Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.855478 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l" (OuterVolumeSpecName: "kube-api-access-66q5l") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "kube-api-access-66q5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.902070 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66q5l\" (UniqueName: \"kubernetes.io/projected/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-kube-api-access-66q5l\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.943663 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.955908 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:48 crc kubenswrapper[4593]: I0129 11:17:48.982125 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.006620 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.006678 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.006692 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.014981 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config" (OuterVolumeSpecName: "config") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.017107 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8fb458d5-4cf6-41ed-bf24-cc63387a17f8" (UID: "8fb458d5-4cf6-41ed-bf24-cc63387a17f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.063966 4593 generic.go:334] "Generic (PLEG): container finished" podID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerID="d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3" exitCode=0 Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.064792 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.065006 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" event={"ID":"8fb458d5-4cf6-41ed-bf24-cc63387a17f8","Type":"ContainerDied","Data":"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3"} Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.065046 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-kpbz6" event={"ID":"8fb458d5-4cf6-41ed-bf24-cc63387a17f8","Type":"ContainerDied","Data":"27df2f7abd836abf6cd98d3ccb15264008f2c53f8cce156f8a156ba7ca552d82"} Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.065068 4593 scope.go:117] "RemoveContainer" containerID="d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.114816 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.114851 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fb458d5-4cf6-41ed-bf24-cc63387a17f8-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.189007 4593 scope.go:117] "RemoveContainer" containerID="b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.221702 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.240686 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-kpbz6"] Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.249866 4593 scope.go:117] "RemoveContainer" containerID="d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3" Jan 29 11:17:49 crc kubenswrapper[4593]: E0129 11:17:49.261877 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3\": container with ID starting with d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3 not found: ID does not exist" containerID="d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.261928 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3"} err="failed to get container status \"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3\": rpc error: code = NotFound desc = could not find container \"d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3\": container with ID starting with d8c466a4721e4e80dcee5d6fc306b00a8e2528371b38488f0d7c1d298edbb2a3 not found: ID does not exist" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.261974 4593 scope.go:117] "RemoveContainer" containerID="b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13" Jan 29 11:17:49 crc kubenswrapper[4593]: E0129 11:17:49.265788 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13\": container with ID starting with b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13 not found: ID does not exist" containerID="b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.265829 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13"} err="failed to get container status \"b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13\": rpc error: code = NotFound desc = could not find container \"b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13\": container with ID starting with b1b07f2017de0e2352ba6afacb58d27c6112126cb7e7975a5838969dfa72ee13 not found: ID does not exist" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.509859 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.510223 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.668201 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.668653 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.767649 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c1755998-9149-49be-b10f-c4fe029728bc" containerName="galera" probeResult="failure" output="command timed out" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.842470 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 11:17:49 crc kubenswrapper[4593]: I0129 11:17:49.887555 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.075169 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.075681 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.473831 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.474162 4593 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-59844fc4b6-zctck" podUID="07d138d8-a5fa-4b77-80e5-924dba8de4c0" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.159:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.498313 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.532078 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59844fc4b6-zctck" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.620968 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.621441 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" containerID="cri-o://a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4" gracePeriod=30 Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.627591 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" containerID="cri-o://3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53" gracePeriod=30 Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.775947 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.776293 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.974041 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:50 crc kubenswrapper[4593]: I0129 11:17:50.993325 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:51 crc kubenswrapper[4593]: I0129 11:17:51.096507 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" path="/var/lib/kubelet/pods/8fb458d5-4cf6-41ed-bf24-cc63387a17f8/volumes" Jan 29 11:17:51 crc kubenswrapper[4593]: I0129 11:17:51.107815 4593 generic.go:334] "Generic (PLEG): container finished" podID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerID="a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4" exitCode=143 Jan 29 11:17:51 crc kubenswrapper[4593]: I0129 11:17:51.109049 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerDied","Data":"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4"} Jan 29 11:17:51 crc kubenswrapper[4593]: I0129 11:17:51.109180 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:51 crc kubenswrapper[4593]: I0129 11:17:51.109243 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:52 crc kubenswrapper[4593]: I0129 11:17:52.114974 4593 prober_manager.go:312] "Failed to 
trigger a manual run" probe="Readiness" Jan 29 11:17:52 crc kubenswrapper[4593]: I0129 11:17:52.115204 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:52 crc kubenswrapper[4593]: I0129 11:17:52.565781 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.161:8080/\": dial tcp 10.217.0.161:8080: connect: connection refused" Jan 29 11:17:53 crc kubenswrapper[4593]: I0129 11:17:53.123499 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:53 crc kubenswrapper[4593]: I0129 11:17:53.124395 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:53 crc kubenswrapper[4593]: I0129 11:17:53.350876 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:53 crc kubenswrapper[4593]: I0129 11:17:53.846118 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7f96568f6f-lfzv9" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.147184 4593 generic.go:334] "Generic (PLEG): container finished" podID="1563c063-cd19-4793-97c0-45ca3e4a3e0c" containerID="b6f550864b30cf24b91a51e513d7e513cf9d2ef7137812c6edc720f9813967f9" exitCode=0 Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.147285 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qt4jn" event={"ID":"1563c063-cd19-4793-97c0-45ca3e4a3e0c","Type":"ContainerDied","Data":"b6f550864b30cf24b91a51e513d7e513cf9d2ef7137812c6edc720f9813967f9"} Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.414979 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 29 11:17:55 crc kubenswrapper[4593]: E0129 11:17:55.415383 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="init" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415403 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="init" Jan 29 11:17:55 crc kubenswrapper[4593]: E0129 11:17:55.415415 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="dnsmasq-dns" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415423 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="dnsmasq-dns" Jan 29 11:17:55 crc kubenswrapper[4593]: E0129 11:17:55.415439 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-api" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415446 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-api" Jan 29 11:17:55 crc kubenswrapper[4593]: E0129 11:17:55.415472 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-log" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415478 4593 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-log" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415686 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-api" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415707 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf8c6b2-659d-4fbb-82ef-d9749443f647" containerName="placement-log" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.415719 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb458d5-4cf6-41ed-bf24-cc63387a17f8" containerName="dnsmasq-dns" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.416381 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.430376 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.430626 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-pbt57" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.441691 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.463249 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.463382 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.463410 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgpjd\" (UniqueName: \"kubernetes.io/projected/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-kube-api-access-tgpjd\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.463426 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.466228 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.565171 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.565218 4593 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.565235 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgpjd\" (UniqueName: \"kubernetes.io/projected/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-kube-api-access-tgpjd\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.565302 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.567138 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.573324 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-openstack-config-secret\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.589437 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-combined-ca-bundle\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.594381 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgpjd\" (UniqueName: \"kubernetes.io/projected/220bdfcb-98c4-4c78-8d95-ea64edfaf1ab-kube-api-access-tgpjd\") pod \"openstackclient\" (UID: \"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab\") " pod="openstack/openstackclient" Jan 29 11:17:55 crc kubenswrapper[4593]: I0129 11:17:55.789046 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.237729 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.237734 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.648280 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.660804 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:38732->10.217.0.158:9311: read: connection reset by peer" Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.661089 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-766cf76c8b-cjg59" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:38716->10.217.0.158:9311: read: connection reset by peer" Jan 29 11:17:56 crc kubenswrapper[4593]: I0129 11:17:56.893020 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.008925 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config\") pod \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.009360 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59ccv\" (UniqueName: \"kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv\") pod \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.009413 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle\") pod \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\" (UID: \"1563c063-cd19-4793-97c0-45ca3e4a3e0c\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.048200 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv" (OuterVolumeSpecName: "kube-api-access-59ccv") pod "1563c063-cd19-4793-97c0-45ca3e4a3e0c" (UID: "1563c063-cd19-4793-97c0-45ca3e4a3e0c"). InnerVolumeSpecName "kube-api-access-59ccv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.098829 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1563c063-cd19-4793-97c0-45ca3e4a3e0c" (UID: "1563c063-cd19-4793-97c0-45ca3e4a3e0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.099167 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config" (OuterVolumeSpecName: "config") pod "1563c063-cd19-4793-97c0-45ca3e4a3e0c" (UID: "1563c063-cd19-4793-97c0-45ca3e4a3e0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.112583 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.112621 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59ccv\" (UniqueName: \"kubernetes.io/projected/1563c063-cd19-4793-97c0-45ca3e4a3e0c-kube-api-access-59ccv\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.112657 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1563c063-cd19-4793-97c0-45ca3e4a3e0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.157712 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.214344 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs\") pod \"f5d54c2a-3590-4623-8641-e3906d9ef79e\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.214401 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data\") pod \"f5d54c2a-3590-4623-8641-e3906d9ef79e\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.214447 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf5mb\" (UniqueName: \"kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb\") pod \"f5d54c2a-3590-4623-8641-e3906d9ef79e\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.214472 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom\") pod \"f5d54c2a-3590-4623-8641-e3906d9ef79e\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.214540 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle\") pod \"f5d54c2a-3590-4623-8641-e3906d9ef79e\" (UID: \"f5d54c2a-3590-4623-8641-e3906d9ef79e\") " Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.216901 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs" (OuterVolumeSpecName: "logs") pod "f5d54c2a-3590-4623-8641-e3906d9ef79e" (UID: "f5d54c2a-3590-4623-8641-e3906d9ef79e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.222756 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb" (OuterVolumeSpecName: "kube-api-access-bf5mb") pod "f5d54c2a-3590-4623-8641-e3906d9ef79e" (UID: "f5d54c2a-3590-4623-8641-e3906d9ef79e"). InnerVolumeSpecName "kube-api-access-bf5mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.229042 4593 generic.go:334] "Generic (PLEG): container finished" podID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerID="3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53" exitCode=0 Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.229190 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-766cf76c8b-cjg59" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.229348 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerDied","Data":"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53"} Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.229393 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766cf76c8b-cjg59" event={"ID":"f5d54c2a-3590-4623-8641-e3906d9ef79e","Type":"ContainerDied","Data":"ab6230e4600dcb9af699c78e8e565ba5926552d85dcff4c655fbdfc2c4ef02b3"} Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.229411 4593 scope.go:117] "RemoveContainer" containerID="3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.231052 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab","Type":"ContainerStarted","Data":"307341a79971f8d77af36b3ff21c83ffc9327dc92bd703679c1d5bcd8132b20d"} Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.232782 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f5d54c2a-3590-4623-8641-e3906d9ef79e" (UID: "f5d54c2a-3590-4623-8641-e3906d9ef79e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.241508 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qt4jn" event={"ID":"1563c063-cd19-4793-97c0-45ca3e4a3e0c","Type":"ContainerDied","Data":"e190e45570748f76e4003c2271bb97bb9945d02157bf9978762b8a5417306bd1"} Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.241557 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e190e45570748f76e4003c2271bb97bb9945d02157bf9978762b8a5417306bd1" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.241644 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qt4jn" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.259227 4593 scope.go:117] "RemoveContainer" containerID="a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.292196 4593 scope.go:117] "RemoveContainer" containerID="3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53" Jan 29 11:17:57 crc kubenswrapper[4593]: E0129 11:17:57.294605 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53\": container with ID starting with 3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53 not found: ID does not exist" containerID="3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.294796 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53"} err="failed to get container status \"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53\": rpc error: code = NotFound desc = could not find container \"3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53\": container with ID starting with 3f764c87c1c674ee266ec11d50ead3b253a7e265b0c6c1414e01734443361b53 not found: ID does not exist" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.294901 4593 scope.go:117] "RemoveContainer" containerID="a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4" Jan 29 11:17:57 crc kubenswrapper[4593]: E0129 11:17:57.295520 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4\": container with ID starting with a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4 not found: ID does not exist" containerID="a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.295563 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4"} err="failed to get container status \"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4\": rpc error: code = NotFound desc = could not find container \"a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4\": container with ID starting with a01e77fb6bb6bed1e88e5489338322c67e46dee88919c812c4f49227de8602a4 not found: ID does not exist" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.306341 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5d54c2a-3590-4623-8641-e3906d9ef79e" (UID: "f5d54c2a-3590-4623-8641-e3906d9ef79e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.310988 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data" (OuterVolumeSpecName: "config-data") pod "f5d54c2a-3590-4623-8641-e3906d9ef79e" (UID: "f5d54c2a-3590-4623-8641-e3906d9ef79e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.318008 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.318049 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5d54c2a-3590-4623-8641-e3906d9ef79e-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.318064 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.318075 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf5mb\" (UniqueName: \"kubernetes.io/projected/f5d54c2a-3590-4623-8641-e3906d9ef79e-kube-api-access-bf5mb\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.318089 4593 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5d54c2a-3590-4623-8641-e3906d9ef79e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.514392 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:17:57 crc kubenswrapper[4593]: E0129 11:17:57.514762 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.514786 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" Jan 29 11:17:57 crc kubenswrapper[4593]: E0129 11:17:57.514796 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.514802 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" Jan 29 11:17:57 crc kubenswrapper[4593]: E0129 11:17:57.514834 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1563c063-cd19-4793-97c0-45ca3e4a3e0c" containerName="neutron-db-sync" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.514840 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1563c063-cd19-4793-97c0-45ca3e4a3e0c" containerName="neutron-db-sync" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.514994 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1563c063-cd19-4793-97c0-45ca3e4a3e0c" containerName="neutron-db-sync" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.515006 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api-log" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.515021 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" containerName="barbican-api" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.515875 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.606570 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.623388 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626296 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626358 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626395 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626453 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626496 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtth\" (UniqueName: \"kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.626522 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.649711 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-766cf76c8b-cjg59"] Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.702069 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"] Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.716291 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.722542 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xg5l8" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.724103 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.724270 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.724454 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728510 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728554 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbtth\" (UniqueName: \"kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728598 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728682 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728730 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.728760 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.730008 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.731057 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.736694 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"] Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.736765 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.737835 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.738061 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.804540 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbtth\" (UniqueName: \"kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth\") pod \"dnsmasq-dns-6578955fd5-9hb8w\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.844240 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.844316 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.844416 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkjl6\" (UniqueName: \"kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.844766 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc 
kubenswrapper[4593]: I0129 11:17:57.844822 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.863741 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.947944 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.948000 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.948041 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.948065 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.948101 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkjl6\" (UniqueName: \"kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.956580 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.959664 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.959811 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " 
pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.981384 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkjl6\" (UniqueName: \"kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:57 crc kubenswrapper[4593]: I0129 11:17:57.982134 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs\") pod \"neutron-5dc77db4b8-s2bq6\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") " pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.077054 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.391969 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.392522 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.505393 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.608031 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.916621 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"] Jan 29 11:17:58 crc kubenswrapper[4593]: W0129 11:17:58.945103 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf8e6616_b9af_427f_9daa_d62ee3cb24d3.slice/crio-88035d7e970cd02ad4e71f38ef640ad02fc3f7e36a8669ad9dc26d692493f526 WatchSource:0}: Error finding container 88035d7e970cd02ad4e71f38ef640ad02fc3f7e36a8669ad9dc26d692493f526: Status 404 returned error can't find the container with id 88035d7e970cd02ad4e71f38ef640ad02fc3f7e36a8669ad9dc26d692493f526 Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.999511 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 11:17:58 crc kubenswrapper[4593]: I0129 11:17:58.999652 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.104232 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5d54c2a-3590-4623-8641-e3906d9ef79e" path="/var/lib/kubelet/pods/f5d54c2a-3590-4623-8641-e3906d9ef79e/volumes" Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.294284 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerStarted","Data":"88035d7e970cd02ad4e71f38ef640ad02fc3f7e36a8669ad9dc26d692493f526"} Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.300974 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="7aadd015-f714-41cf-b532-396d9f5f3946" containerID="d7d10b40887ad7cb3695100bfd7e2e09a54897e25591da02ac46e6c0d27cc415" exitCode=0 Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.301241 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" containerID="cri-o://49c7f116f6b968b8e92002d04be3944f190deaba5cfb0c87a84ff79e7f77d0cb" gracePeriod=30 Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.301929 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" event={"ID":"7aadd015-f714-41cf-b532-396d9f5f3946","Type":"ContainerDied","Data":"d7d10b40887ad7cb3695100bfd7e2e09a54897e25591da02ac46e6c0d27cc415"} Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.301959 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" event={"ID":"7aadd015-f714-41cf-b532-396d9f5f3946","Type":"ContainerStarted","Data":"f371f618c4302fbf0bf3244208980a3b33a4e263434fd709be03f076a3036627"} Jan 29 11:17:59 crc kubenswrapper[4593]: I0129 11:17:59.302313 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="probe" containerID="cri-o://24897273abec623fff6c526f0b856b7cfaaa9ed18d3e576b618b0daab55ab047" gracePeriod=30 Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.043506 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.043992 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.051365 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.267348 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.345574 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" event={"ID":"7aadd015-f714-41cf-b532-396d9f5f3946","Type":"ContainerStarted","Data":"71929b9f4271d72dbfcb871f40c2a2b36bba6325c1864b1f8ec830759d7bd059"} Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.345663 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.373558 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerStarted","Data":"ea1f5b0da7cda5576a556da562bab910500bb22fc10f44670339b87aed033fff"} Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.373800 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.373893 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerStarted","Data":"09e3428cd83e854d7603f9f23c1fc803bfbc3479156a4044437b5fa34689606a"} Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.400779 4593 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" podStartSLOduration=3.400754738 podStartE2EDuration="3.400754738s" podCreationTimestamp="2026-01-29 11:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:00.376614374 +0000 UTC m=+1146.249648565" watchObservedRunningTime="2026-01-29 11:18:00.400754738 +0000 UTC m=+1146.273788929" Jan 29 11:18:00 crc kubenswrapper[4593]: I0129 11:18:00.431527 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5dc77db4b8-s2bq6" podStartSLOduration=3.431499029 podStartE2EDuration="3.431499029s" podCreationTimestamp="2026-01-29 11:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:00.408862665 +0000 UTC m=+1146.281896856" watchObservedRunningTime="2026-01-29 11:18:00.431499029 +0000 UTC m=+1146.304533220" Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.294060 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.433371 4593 generic.go:334] "Generic (PLEG): container finished" podID="10756552-28da-4e84-9c43-fb2be288e81f" containerID="24897273abec623fff6c526f0b856b7cfaaa9ed18d3e576b618b0daab55ab047" exitCode=0 Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.433408 4593 generic.go:334] "Generic (PLEG): container finished" podID="10756552-28da-4e84-9c43-fb2be288e81f" containerID="49c7f116f6b968b8e92002d04be3944f190deaba5cfb0c87a84ff79e7f77d0cb" exitCode=0 Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.434394 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerDied","Data":"24897273abec623fff6c526f0b856b7cfaaa9ed18d3e576b618b0daab55ab047"} Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.434431 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerDied","Data":"49c7f116f6b968b8e92002d04be3944f190deaba5cfb0c87a84ff79e7f77d0cb"} Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.974727 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84867bd7b9-4vrb9"] Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.976460 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.981069 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 29 11:18:01 crc kubenswrapper[4593]: I0129 11:18:01.981386 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.012411 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84867bd7b9-4vrb9"] Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.071546 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.124403 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.124831 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.124919 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.124969 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smgc5\" (UniqueName: \"kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125151 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125218 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125333 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle\") pod \"10756552-28da-4e84-9c43-fb2be288e81f\" (UID: \"10756552-28da-4e84-9c43-fb2be288e81f\") " Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125697 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-combined-ca-bundle\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125756 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s6qt\" (UniqueName: \"kubernetes.io/projected/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-kube-api-access-9s6qt\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.125980 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-httpd-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.126026 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-public-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.126080 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-ovndb-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.126103 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-internal-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.126182 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.126312 4593 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/10756552-28da-4e84-9c43-fb2be288e81f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.150869 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts" (OuterVolumeSpecName: "scripts") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.162419 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.173141 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5" (OuterVolumeSpecName: "kube-api-access-smgc5") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "kube-api-access-smgc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234584 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234718 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-combined-ca-bundle\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234747 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s6qt\" (UniqueName: \"kubernetes.io/projected/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-kube-api-access-9s6qt\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-httpd-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234832 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-public-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234865 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-ovndb-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234884 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-internal-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234958 4593 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234969 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.234980 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smgc5\" (UniqueName: \"kubernetes.io/projected/10756552-28da-4e84-9c43-fb2be288e81f-kube-api-access-smgc5\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc 
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.246929 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-internal-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.260145 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-combined-ca-bundle\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.269763 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.277512 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-public-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.277515 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-ovndb-tls-certs\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.286168 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-httpd-config\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.318529 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s6qt\" (UniqueName: \"kubernetes.io/projected/174d0d16-4c6e-403a-bf10-0a69b4e98fb1-kube-api-access-9s6qt\") pod \"neutron-84867bd7b9-4vrb9\" (UID: \"174d0d16-4c6e-403a-bf10-0a69b4e98fb1\") " pod="openstack/neutron-84867bd7b9-4vrb9"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.348027 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.380457 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84867bd7b9-4vrb9"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.445683 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.467001 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"10756552-28da-4e84-9c43-fb2be288e81f","Type":"ContainerDied","Data":"966232a0b0262262a982b33e0fb01619e0942fc49fb0be06397f90be642babf0"}
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.467051 4593 scope.go:117] "RemoveContainer" containerID="24897273abec623fff6c526f0b856b7cfaaa9ed18d3e576b618b0daab55ab047"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.467175 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.479368 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k4l8n_9194cbfb-27b9-47e8-90eb-64b9391d0b07/registry-server/0.log"
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.495839 4593 generic.go:334] "Generic (PLEG): container finished" podID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerID="392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9" exitCode=137
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.496527 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9"}
Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.504866 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data" (OuterVolumeSpecName: "config-data") pod "10756552-28da-4e84-9c43-fb2be288e81f" (UID: "10756552-28da-4e84-9c43-fb2be288e81f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.524924 4593 scope.go:117] "RemoveContainer" containerID="49c7f116f6b968b8e92002d04be3944f190deaba5cfb0c87a84ff79e7f77d0cb" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.547817 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10756552-28da-4e84-9c43-fb2be288e81f-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.844024 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.868767 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.907032 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:18:02 crc kubenswrapper[4593]: E0129 11:18:02.907404 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="probe" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.907422 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="probe" Jan 29 11:18:02 crc kubenswrapper[4593]: E0129 11:18:02.907442 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.907448 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.907617 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="cinder-scheduler" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.907662 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="10756552-28da-4e84-9c43-fb2be288e81f" containerName="probe" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.908528 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.914194 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 29 11:18:02 crc kubenswrapper[4593]: I0129 11:18:02.930139 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.060863 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5516e5e9-a6e4-4877-bd34-af4128cc7e33-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.060938 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-scripts\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.061012 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.061064 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.061135 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.061200 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfmp\" (UniqueName: \"kubernetes.io/projected/5516e5e9-a6e4-4877-bd34-af4128cc7e33-kube-api-access-hjfmp\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.103085 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10756552-28da-4e84-9c43-fb2be288e81f" path="/var/lib/kubelet/pods/10756552-28da-4e84-9c43-fb2be288e81f/volumes" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163201 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163278 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163349 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163398 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjfmp\" (UniqueName: \"kubernetes.io/projected/5516e5e9-a6e4-4877-bd34-af4128cc7e33-kube-api-access-hjfmp\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163481 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5516e5e9-a6e4-4877-bd34-af4128cc7e33-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.163517 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-scripts\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.169615 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.169727 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5516e5e9-a6e4-4877-bd34-af4128cc7e33-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.170280 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-scripts\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.176341 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.195181 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5516e5e9-a6e4-4877-bd34-af4128cc7e33-config-data\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.195985 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjfmp\" (UniqueName: \"kubernetes.io/projected/5516e5e9-a6e4-4877-bd34-af4128cc7e33-kube-api-access-hjfmp\") pod \"cinder-scheduler-0\" (UID: \"5516e5e9-a6e4-4877-bd34-af4128cc7e33\") " pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.326581 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.345208 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84867bd7b9-4vrb9"] Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.433859 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.546422 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84867bd7b9-4vrb9" event={"ID":"174d0d16-4c6e-403a-bf10-0a69b4e98fb1","Type":"ContainerStarted","Data":"abb0936fdc501c6fd66d807c5b1109e1663f7b99c3b19651569fcf3b3fd0d74b"} Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.557340 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k4l8n_9194cbfb-27b9-47e8-90eb-64b9391d0b07/registry-server/0.log" Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.561864 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerStarted","Data":"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95"} Jan 29 11:18:03 crc kubenswrapper[4593]: I0129 11:18:03.926524 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 29 11:18:04 crc kubenswrapper[4593]: I0129 11:18:04.580271 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84867bd7b9-4vrb9" event={"ID":"174d0d16-4c6e-403a-bf10-0a69b4e98fb1","Type":"ContainerStarted","Data":"2acb4fa35d4afa0e84525e5f6be668bf1ac762b1e2bd13f1644f9ec69cb6cf3d"} Jan 29 11:18:04 crc kubenswrapper[4593]: I0129 11:18:04.581228 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84867bd7b9-4vrb9" event={"ID":"174d0d16-4c6e-403a-bf10-0a69b4e98fb1","Type":"ContainerStarted","Data":"4f058cbdce9737012ff485ff8ec301e5a9e74f34b759b32bb8eae25cca8f5acc"} Jan 29 11:18:04 crc kubenswrapper[4593]: I0129 11:18:04.581899 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:04 crc kubenswrapper[4593]: I0129 11:18:04.584827 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5516e5e9-a6e4-4877-bd34-af4128cc7e33","Type":"ContainerStarted","Data":"3168e5f76687d9beb56498941ffa703bdf21d9536851728a69bc369fa9efead7"} Jan 29 11:18:04 crc kubenswrapper[4593]: I0129 11:18:04.615538 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84867bd7b9-4vrb9" podStartSLOduration=3.615514932 podStartE2EDuration="3.615514932s" podCreationTimestamp="2026-01-29 11:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-29 11:18:04.609077079 +0000 UTC m=+1150.482111270" watchObservedRunningTime="2026-01-29 11:18:04.615514932 +0000 UTC m=+1150.488549123" Jan 29 11:18:05 crc kubenswrapper[4593]: I0129 11:18:05.622469 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5516e5e9-a6e4-4877-bd34-af4128cc7e33","Type":"ContainerStarted","Data":"351b4877f3dbb97ff5c9c41efa352d54dba91cf00802c322b48a40cd15d9e957"} Jan 29 11:18:06 crc kubenswrapper[4593]: I0129 11:18:06.637099 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5516e5e9-a6e4-4877-bd34-af4128cc7e33","Type":"ContainerStarted","Data":"6312f89bb42170d2ee932fb1e176e775bca45bca9b1af753eb54b2a689086c06"} Jan 29 11:18:06 crc kubenswrapper[4593]: I0129 11:18:06.660721 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.660700719 podStartE2EDuration="4.660700719s" podCreationTimestamp="2026-01-29 11:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:06.653223079 +0000 UTC m=+1152.526257270" watchObservedRunningTime="2026-01-29 11:18:06.660700719 +0000 UTC m=+1152.533734910" Jan 29 11:18:07 crc kubenswrapper[4593]: I0129 11:18:07.865816 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:18:07 crc kubenswrapper[4593]: I0129 11:18:07.950284 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"] Jan 29 11:18:07 crc kubenswrapper[4593]: I0129 11:18:07.950778 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="dnsmasq-dns" containerID="cri-o://a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73" gracePeriod=10 Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.326920 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.480267 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.545166 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.571314 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.609313 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.609358 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwb96\" (UniqueName: \"kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.609378 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.610468 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.610784 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.610816 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config\") pod \"cad93c02-cde3-4a50-9f89-1800d0436d2d\" (UID: \"cad93c02-cde3-4a50-9f89-1800d0436d2d\") " Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.669886 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96" (OuterVolumeSpecName: "kube-api-access-cwb96") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "kube-api-access-cwb96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.708046 4593 generic.go:334] "Generic (PLEG): container finished" podID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerID="a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73" exitCode=0 Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.708827 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.708889 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" event={"ID":"cad93c02-cde3-4a50-9f89-1800d0436d2d","Type":"ContainerDied","Data":"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73"} Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.708914 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb" event={"ID":"cad93c02-cde3-4a50-9f89-1800d0436d2d","Type":"ContainerDied","Data":"564ff28580e51f15a586a4b36ebebac1a1de37d8a71b76aea863a2b018150e6b"} Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.708931 4593 scope.go:117] "RemoveContainer" containerID="a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.717714 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwb96\" (UniqueName: \"kubernetes.io/projected/cad93c02-cde3-4a50-9f89-1800d0436d2d-kube-api-access-cwb96\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.767330 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.807953 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.808177 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config" (OuterVolumeSpecName: "config") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.818811 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.820040 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.820058 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.820069 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.820078 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.832533 4593 scope.go:117] "RemoveContainer" containerID="b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.859068 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cad93c02-cde3-4a50-9f89-1800d0436d2d" (UID: "cad93c02-cde3-4a50-9f89-1800d0436d2d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.881731 4593 scope.go:117] "RemoveContainer" containerID="a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73" Jan 29 11:18:08 crc kubenswrapper[4593]: E0129 11:18:08.884999 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73\": container with ID starting with a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73 not found: ID does not exist" containerID="a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.885195 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73"} err="failed to get container status \"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73\": rpc error: code = NotFound desc = could not find container \"a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73\": container with ID starting with a493fd10106184253e493388b4dfa71c635ecf5329b1a15c3ccde9fe523d1e73 not found: ID does not exist" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.885358 4593 scope.go:117] "RemoveContainer" containerID="b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5" Jan 29 11:18:08 crc kubenswrapper[4593]: E0129 11:18:08.886938 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5\": container with ID starting with b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5 not found: ID does not exist" 
containerID="b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.886991 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5"} err="failed to get container status \"b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5\": rpc error: code = NotFound desc = could not find container \"b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5\": container with ID starting with b5db7de407f29070d58723bcbb491e8220b21b0f76aba938e6b5ac7b8b233fc5 not found: ID does not exist" Jan 29 11:18:08 crc kubenswrapper[4593]: I0129 11:18:08.921309 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad93c02-cde3-4a50-9f89-1800d0436d2d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:09 crc kubenswrapper[4593]: I0129 11:18:09.049705 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"] Jan 29 11:18:09 crc kubenswrapper[4593]: I0129 11:18:09.056685 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc8b5d5c5-2q2qb"] Jan 29 11:18:09 crc kubenswrapper[4593]: I0129 11:18:09.091612 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" path="/var/lib/kubelet/pods/cad93c02-cde3-4a50-9f89-1800d0436d2d/volumes" Jan 29 11:18:10 crc kubenswrapper[4593]: I0129 11:18:10.053828 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:18:10 crc kubenswrapper[4593]: I0129 11:18:10.053869 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:18:11 crc kubenswrapper[4593]: I0129 11:18:11.104599 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:18:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:18:11 crc kubenswrapper[4593]: > Jan 29 11:18:11 crc kubenswrapper[4593]: I0129 11:18:11.741303 4593 generic.go:334] "Generic (PLEG): container finished" podID="95847704-1027-4518-9f5c-cd663496b804" containerID="532ef2b08300e953556c4f80a0efbeeef65f13a2c78db2506158a85df92e08ac" exitCode=137 Jan 29 11:18:11 crc kubenswrapper[4593]: I0129 11:18:11.741533 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerDied","Data":"532ef2b08300e953556c4f80a0efbeeef65f13a2c78db2506158a85df92e08ac"} Jan 29 11:18:12 crc kubenswrapper[4593]: I0129 11:18:12.955332 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:12 crc kubenswrapper[4593]: I0129 11:18:12.956010 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="sg-core" containerID="cri-o://b0abb69f5e56bccd2bb62baeb61fd064ee7010eb36ba3b37edb2c69864a733d7" gracePeriod=30 Jan 29 11:18:12 crc kubenswrapper[4593]: I0129 11:18:12.956017 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-notification-agent" containerID="cri-o://aba14bdcb819b3097f623b10d1f889520b4a3ec8b94a23129679074b0158bb26" gracePeriod=30 Jan 29 11:18:12 crc kubenswrapper[4593]: I0129 11:18:12.956198 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="proxy-httpd" containerID="cri-o://88b868d7da96b6b3e10186188d5bbc939be24d322cd5116219ae0adb17dbd928" gracePeriod=30 Jan 29 11:18:12 crc kubenswrapper[4593]: I0129 11:18:12.956272 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-central-agent" containerID="cri-o://a9f1fe703de62c9906cf5414628cb1871967b692dd15c7ec296d4900c7151a67" gracePeriod=30 Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771191 4593 generic.go:334] "Generic (PLEG): container finished" podID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerID="88b868d7da96b6b3e10186188d5bbc939be24d322cd5116219ae0adb17dbd928" exitCode=0 Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771222 4593 generic.go:334] "Generic (PLEG): container finished" podID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerID="b0abb69f5e56bccd2bb62baeb61fd064ee7010eb36ba3b37edb2c69864a733d7" exitCode=2 Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771232 4593 generic.go:334] "Generic (PLEG): container finished" podID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerID="aba14bdcb819b3097f623b10d1f889520b4a3ec8b94a23129679074b0158bb26" exitCode=0 Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771241 4593 generic.go:334] "Generic (PLEG): container finished" podID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerID="a9f1fe703de62c9906cf5414628cb1871967b692dd15c7ec296d4900c7151a67" exitCode=0 Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771260 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerDied","Data":"88b868d7da96b6b3e10186188d5bbc939be24d322cd5116219ae0adb17dbd928"} Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771286 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerDied","Data":"b0abb69f5e56bccd2bb62baeb61fd064ee7010eb36ba3b37edb2c69864a733d7"} Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771295 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerDied","Data":"aba14bdcb819b3097f623b10d1f889520b4a3ec8b94a23129679074b0158bb26"} Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.771303 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerDied","Data":"a9f1fe703de62c9906cf5414628cb1871967b692dd15c7ec296d4900c7151a67"} Jan 29 11:18:13 crc kubenswrapper[4593]: I0129 11:18:13.791766 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 29 11:18:14 crc kubenswrapper[4593]: I0129 11:18:14.771458 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:14 crc kubenswrapper[4593]: I0129 11:18:14.772008 4593 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/glance-default-internal-api-0" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-log" containerID="cri-o://d9d7dd8976380d6486fd1b5f21789a9b38a5817e8ac2103c8d17ab8df8f5fe64" gracePeriod=30 Jan 29 11:18:14 crc kubenswrapper[4593]: I0129 11:18:14.772282 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-httpd" containerID="cri-o://964d34df183e187ec805f4ff554355a6b6ef2fc5d1f44b5ea4d74d26a5c58cdc" gracePeriod=30 Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.692428 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-58d6d94967-wdzcg"] Jan 29 11:18:15 crc kubenswrapper[4593]: E0129 11:18:15.692911 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="init" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.692932 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="init" Jan 29 11:18:15 crc kubenswrapper[4593]: E0129 11:18:15.692961 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="dnsmasq-dns" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.692971 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="dnsmasq-dns" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.693199 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="cad93c02-cde3-4a50-9f89-1800d0436d2d" containerName="dnsmasq-dns" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.695909 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.698767 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.699292 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.699500 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.719699 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58d6d94967-wdzcg"] Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761526 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-combined-ca-bundle\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761595 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-log-httpd\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761721 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6624\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-kube-api-access-x6624\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761792 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-run-httpd\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761858 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-internal-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761940 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-public-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.761994 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-config-data\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " 
pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.762130 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-etc-swift\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.803856 4593 generic.go:334] "Generic (PLEG): container finished" podID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerID="d9d7dd8976380d6486fd1b5f21789a9b38a5817e8ac2103c8d17ab8df8f5fe64" exitCode=143 Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.803944 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerDied","Data":"d9d7dd8976380d6486fd1b5f21789a9b38a5817e8ac2103c8d17ab8df8f5fe64"} Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.810262 4593 generic.go:334] "Generic (PLEG): container finished" podID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerID="a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996" exitCode=137 Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.810386 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerDied","Data":"a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996"} Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.827270 4593 generic.go:334] "Generic (PLEG): container finished" podID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerID="948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1" exitCode=137 Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.827331 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerDied","Data":"948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1"} Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.863617 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-config-data\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.863689 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-etc-swift\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865273 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-combined-ca-bundle\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865324 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-log-httpd\") pod 
\"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865409 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6624\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-kube-api-access-x6624\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865451 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-run-httpd\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865511 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-internal-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.865577 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-public-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.867344 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-run-httpd\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.867994 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1bc6621-0892-452c-9f95-54554f8c6e68-log-httpd\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.872382 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-combined-ca-bundle\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.873213 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-etc-swift\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.873672 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-internal-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " 
pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.874059 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-config-data\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.874781 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1bc6621-0892-452c-9f95-54554f8c6e68-public-tls-certs\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:15 crc kubenswrapper[4593]: I0129 11:18:15.892511 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6624\" (UniqueName: \"kubernetes.io/projected/f1bc6621-0892-452c-9f95-54554f8c6e68-kube-api-access-x6624\") pod \"swift-proxy-58d6d94967-wdzcg\" (UID: \"f1bc6621-0892-452c-9f95-54554f8c6e68\") " pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:16 crc kubenswrapper[4593]: I0129 11:18:16.020058 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.310009 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.469848 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.470162 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-log" containerID="cri-o://90fb85235bc3606a7b4bb84b4b179cef3fafc0ce2eb0f3b29c3cc2eb08fb78b3" gracePeriod=30 Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.470277 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-httpd" containerID="cri-o://3293c2e1edd54e8ff7f4dc2cefd7cf058a429e32cd917bd68da12dc400ead3f5" gracePeriod=30 Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.901217 4593 generic.go:334] "Generic (PLEG): container finished" podID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerID="964d34df183e187ec805f4ff554355a6b6ef2fc5d1f44b5ea4d74d26a5c58cdc" exitCode=0 Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.901286 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerDied","Data":"964d34df183e187ec805f4ff554355a6b6ef2fc5d1f44b5ea4d74d26a5c58cdc"} Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.905405 4593 generic.go:334] "Generic (PLEG): container finished" podID="7289daaa-acda-4854-a506-c6cc429562d3" containerID="90fb85235bc3606a7b4bb84b4b179cef3fafc0ce2eb0f3b29c3cc2eb08fb78b3" exitCode=143 Jan 29 11:18:18 crc kubenswrapper[4593]: I0129 11:18:18.905434 4593 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerDied","Data":"90fb85235bc3606a7b4bb84b4b179cef3fafc0ce2eb0f3b29c3cc2eb08fb78b3"} Jan 29 11:18:20 crc kubenswrapper[4593]: E0129 11:18:20.564236 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 29 11:18:20 crc kubenswrapper[4593]: E0129 11:18:20.564749 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchcdhb4h5bch5dfh66bh54fhb5hc9h5f4h5b8h5h665h69h74h68ch5f6hb6h546h79h76h5c9h6ch68ch89hf4h4h4h76h9h58bh65q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgpjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(220bdfcb-98c4-4c78-8d95-ea64edfaf1ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 11:18:20 crc kubenswrapper[4593]: E0129 11:18:20.565984 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="220bdfcb-98c4-4c78-8d95-ea64edfaf1ab" Jan 29 11:18:20 crc kubenswrapper[4593]: I0129 11:18:20.988259 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"95847704-1027-4518-9f5c-cd663496b804","Type":"ContainerDied","Data":"5819a6ffae38a266d2b0e8c7f0f4a9a9ec8806aff42d69e8d72319628c862e12"} Jan 29 11:18:20 crc kubenswrapper[4593]: I0129 11:18:20.988492 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5819a6ffae38a266d2b0e8c7f0f4a9a9ec8806aff42d69e8d72319628c862e12" Jan 29 11:18:20 crc kubenswrapper[4593]: E0129 11:18:20.992858 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="220bdfcb-98c4-4c78-8d95-ea64edfaf1ab" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.051979 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.138477 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:18:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:18:21 crc kubenswrapper[4593]: > Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174058 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174134 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174160 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174199 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174296 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg67x\" (UniqueName: \"kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174319 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.174389 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle\") pod \"95847704-1027-4518-9f5c-cd663496b804\" (UID: \"95847704-1027-4518-9f5c-cd663496b804\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.177820 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.181241 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs" (OuterVolumeSpecName: "logs") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.184862 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts" (OuterVolumeSpecName: "scripts") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.190821 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.190898 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x" (OuterVolumeSpecName: "kube-api-access-sg67x") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "kube-api-access-sg67x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.260426 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280837 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280873 4593 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280884 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg67x\" (UniqueName: \"kubernetes.io/projected/95847704-1027-4518-9f5c-cd663496b804-kube-api-access-sg67x\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280892 4593 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/95847704-1027-4518-9f5c-cd663496b804-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280900 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.280908 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95847704-1027-4518-9f5c-cd663496b804-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.338382 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data" (OuterVolumeSpecName: "config-data") pod "95847704-1027-4518-9f5c-cd663496b804" (UID: "95847704-1027-4518-9f5c-cd663496b804"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.373364 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.384392 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95847704-1027-4518-9f5c-cd663496b804-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.485733 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.485840 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bdq8\" (UniqueName: \"kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.485882 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.485926 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.485980 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.486029 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.486057 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd\") pod \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\" (UID: \"852a4805-5ddc-4a1d-a642-9d5e6bbb9206\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.488422 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.488912 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.497490 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts" (OuterVolumeSpecName: "scripts") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.509913 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8" (OuterVolumeSpecName: "kube-api-access-6bdq8") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "kube-api-access-6bdq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.572937 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.590866 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.590900 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.590908 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bdq8\" (UniqueName: \"kubernetes.io/projected/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-kube-api-access-6bdq8\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.590919 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.590928 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.646918 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.693881 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.731354 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.785797 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data" (OuterVolumeSpecName: "config-data") pod "852a4805-5ddc-4a1d-a642-9d5e6bbb9206" (UID: "852a4805-5ddc-4a1d-a642-9d5e6bbb9206"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.794835 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.794919 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.794985 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795010 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795117 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795194 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9nh2\" (UniqueName: \"kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795284 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795358 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run\") pod \"911edffc-f4d0-40bf-b49c-c1ab592dd258\" (UID: \"911edffc-f4d0-40bf-b49c-c1ab592dd258\") " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.795859 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/852a4805-5ddc-4a1d-a642-9d5e6bbb9206-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.796474 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.797034 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs" (OuterVolumeSpecName: "logs") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.809165 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts" (OuterVolumeSpecName: "scripts") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.813561 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.828970 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2" (OuterVolumeSpecName: "kube-api-access-z9nh2") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "kube-api-access-z9nh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.875037 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.899795 4593 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.900049 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.900145 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.900218 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/911edffc-f4d0-40bf-b49c-c1ab592dd258-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.900283 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9nh2\" (UniqueName: \"kubernetes.io/projected/911edffc-f4d0-40bf-b49c-c1ab592dd258-kube-api-access-z9nh2\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.900347 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.912803 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.929845 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data" (OuterVolumeSpecName: "config-data") pod "911edffc-f4d0-40bf-b49c-c1ab592dd258" (UID: "911edffc-f4d0-40bf-b49c-c1ab592dd258"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:21 crc kubenswrapper[4593]: I0129 11:18:21.934264 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.015378 4593 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.015599 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.015713 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/911edffc-f4d0-40bf-b49c-c1ab592dd258-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.094206 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerStarted","Data":"d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af"} Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.114856 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"852a4805-5ddc-4a1d-a642-9d5e6bbb9206","Type":"ContainerDied","Data":"0eb50a3ac1f633cc99edb2df912ed9ee0643f4c8b02ce477d7d327cbda5af774"} Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.114919 4593 scope.go:117] "RemoveContainer" containerID="88b868d7da96b6b3e10186188d5bbc939be24d322cd5116219ae0adb17dbd928" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.115105 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.147678 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-58d6d94967-wdzcg"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.185212 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerStarted","Data":"b268f526e5a04b5381dd6c521b7785de6e18d74e1d8c1ba48d2b1ab6cb3e4972"} Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.215269 4593 scope.go:117] "RemoveContainer" containerID="b0abb69f5e56bccd2bb62baeb61fd064ee7010eb36ba3b37edb2c69864a733d7" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.215526 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.217779 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.218169 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.218229 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"911edffc-f4d0-40bf-b49c-c1ab592dd258","Type":"ContainerDied","Data":"4bb371c1c9d2fcc4f80bfb03ebb66d3dd6167a7190179617153d4df635eb3592"} Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.283721 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.313397 4593 scope.go:117] "RemoveContainer" containerID="aba14bdcb819b3097f623b10d1f889520b4a3ec8b94a23129679074b0158bb26" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.353796 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354230 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="sg-core" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354249 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="sg-core" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354261 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-central-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354267 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-central-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354290 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354298 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354317 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api-log" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354324 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api-log" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354339 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="proxy-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354356 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="proxy-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354368 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-notification-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354374 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-notification-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354386 4593 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354392 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" Jan 29 11:18:22 crc kubenswrapper[4593]: E0129 11:18:22.354404 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-log" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354411 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-log" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354567 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api-log" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354580 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="proxy-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354588 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-log" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354603 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="sg-core" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354614 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-central-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354625 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" containerName="ceilometer-notification-agent" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354666 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" containerName="glance-httpd" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.354675 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.356189 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.360392 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.360624 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.385215 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.403679 4593 scope.go:117] "RemoveContainer" containerID="a9f1fe703de62c9906cf5414628cb1871967b692dd15c7ec296d4900c7151a67" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.411690 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431456 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsjrw\" (UniqueName: \"kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431536 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431560 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431587 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431602 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431656 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.431696 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.433882 4593 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.458704 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.482166 4593 scope.go:117] "RemoveContainer" containerID="964d34df183e187ec805f4ff554355a6b6ef2fc5d1f44b5ea4d74d26a5c58cdc" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.492718 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.526250 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.527825 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.530189 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534607 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsjrw\" (UniqueName: \"kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534700 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534724 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534756 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534771 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534834 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.534878 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " 
pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.539332 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.540712 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.541203 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.549619 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.551240 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.565257 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.581608 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.592204 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsjrw\" (UniqueName: \"kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw\") pod \"ceilometer-0\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " pod="openstack/ceilometer-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636178 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636246 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636288 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636321 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636351 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636384 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636416 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdcdw\" (UniqueName: \"kubernetes.io/projected/c4f0192e-509d-46a4-9a2a-c82106019381-kube-api-access-gdcdw\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.636462 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.664669 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.666309 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.686574 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.687336 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.687491 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.704881 4593 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.739861 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.748505 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.751765 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.751878 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.751967 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcsqs\" (UniqueName: \"kubernetes.io/projected/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-kube-api-access-tcsqs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752002 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752059 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-scripts\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752112 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdcdw\" (UniqueName: \"kubernetes.io/projected/c4f0192e-509d-46a4-9a2a-c82106019381-kube-api-access-gdcdw\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752269 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752314 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752345 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752406 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752498 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752553 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752601 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-logs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752701 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752736 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.752776 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.764219 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.766256 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.768945 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.769918 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4f0192e-509d-46a4-9a2a-c82106019381-logs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.771336 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.775725 4593 scope.go:117] "RemoveContainer" containerID="d9d7dd8976380d6486fd1b5f21789a9b38a5817e8ac2103c8d17ab8df8f5fe64"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.779894 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.781819 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.787291 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f0192e-509d-46a4-9a2a-c82106019381-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.813299 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdcdw\" (UniqueName: \"kubernetes.io/projected/c4f0192e-509d-46a4-9a2a-c82106019381-kube-api-access-gdcdw\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857012 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857075 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857137 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857165 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-logs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857194 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857214 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857235 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857281 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcsqs\" (UniqueName: \"kubernetes.io/projected/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-kube-api-access-tcsqs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.857311 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-scripts\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.858873 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-logs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.866734 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.876051 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0"
\"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.877958 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-scripts\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.878432 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.885250 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.885364 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.901094 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-config-data\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.904772 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c4f0192e-509d-46a4-9a2a-c82106019381\") " pod="openstack/glance-default-internal-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.905863 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcsqs\" (UniqueName: \"kubernetes.io/projected/c7ea14af-5b7c-44d6-a34c-1a344bfc96ef-kube-api-access-tcsqs\") pod \"cinder-api-0\" (UID: \"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef\") " pod="openstack/cinder-api-0" Jan 29 11:18:22 crc kubenswrapper[4593]: I0129 11:18:22.932448 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.033552 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.111035 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="852a4805-5ddc-4a1d-a642-9d5e6bbb9206" path="/var/lib/kubelet/pods/852a4805-5ddc-4a1d-a642-9d5e6bbb9206/volumes" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.112166 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911edffc-f4d0-40bf-b49c-c1ab592dd258" path="/var/lib/kubelet/pods/911edffc-f4d0-40bf-b49c-c1ab592dd258/volumes" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.113559 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95847704-1027-4518-9f5c-cd663496b804" path="/var/lib/kubelet/pods/95847704-1027-4518-9f5c-cd663496b804/volumes" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.297808 4593 generic.go:334] "Generic (PLEG): container finished" podID="7289daaa-acda-4854-a506-c6cc429562d3" containerID="3293c2e1edd54e8ff7f4dc2cefd7cf058a429e32cd917bd68da12dc400ead3f5" exitCode=0 Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.298180 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerDied","Data":"3293c2e1edd54e8ff7f4dc2cefd7cf058a429e32cd917bd68da12dc400ead3f5"} Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.312677 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="95847704-1027-4518-9f5c-cd663496b804" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.163:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.330080 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58d6d94967-wdzcg" event={"ID":"f1bc6621-0892-452c-9f95-54554f8c6e68","Type":"ContainerStarted","Data":"3a88b331aa6b8c8e95781edc38ffd4762f674838fa864fc8b53fe87a5d08785f"} Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.330152 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58d6d94967-wdzcg" event={"ID":"f1bc6621-0892-452c-9f95-54554f8c6e68","Type":"ContainerStarted","Data":"922c276c74b50b8fe632937198b2477a6a9b17d827dc74f6a75da896a0452cf2"} Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.476789 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.491702 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5xlg\" (UniqueName: \"kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.492689 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.492797 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.492896 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.493207 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.493481 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.493606 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.494038 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data\") pod \"7289daaa-acda-4854-a506-c6cc429562d3\" (UID: \"7289daaa-acda-4854-a506-c6cc429562d3\") " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.494983 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.500901 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs" (OuterVolumeSpecName: "logs") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.532293 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts" (OuterVolumeSpecName: "scripts") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.534856 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg" (OuterVolumeSpecName: "kube-api-access-p5xlg") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "kube-api-access-p5xlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.600563 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5xlg\" (UniqueName: \"kubernetes.io/projected/7289daaa-acda-4854-a506-c6cc429562d3-kube-api-access-p5xlg\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.602220 4593 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.602342 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7289daaa-acda-4854-a506-c6cc429562d3-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.602426 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.609560 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.688599 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:23 crc kubenswrapper[4593]: W0129 11:18:23.701378 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78ec86eb_f94b_4f7f_83f0_30c10fd87869.slice/crio-711c86a21fd2d816293a1020209e105bca7ea576e3a8136db02ca95eb6d35ea5 WatchSource:0}: Error finding container 711c86a21fd2d816293a1020209e105bca7ea576e3a8136db02ca95eb6d35ea5: Status 404 returned error can't find the container with id 711c86a21fd2d816293a1020209e105bca7ea576e3a8136db02ca95eb6d35ea5 Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.705389 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.737027 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.808111 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.836900 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.837308 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.839975 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data" (OuterVolumeSpecName: "config-data") pod "7289daaa-acda-4854-a506-c6cc429562d3" (UID: "7289daaa-acda-4854-a506-c6cc429562d3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.910891 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.910919 4593 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:23 crc kubenswrapper[4593]: I0129 11:18:23.910929 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7289daaa-acda-4854-a506-c6cc429562d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.017601 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.185545 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.359656 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7289daaa-acda-4854-a506-c6cc429562d3","Type":"ContainerDied","Data":"db9797e87c1781dc943e7d1006dfa6fe3eaaf5edc0bffd04dc66ed3f512449a4"} Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.359706 4593 scope.go:117] "RemoveContainer" containerID="3293c2e1edd54e8ff7f4dc2cefd7cf058a429e32cd917bd68da12dc400ead3f5" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.359851 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.370443 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4f0192e-509d-46a4-9a2a-c82106019381","Type":"ContainerStarted","Data":"611919187e7b8eab13192430a2187608d9df802c0e23e7889c0cb34217e85d57"} Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.381335 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerStarted","Data":"711c86a21fd2d816293a1020209e105bca7ea576e3a8136db02ca95eb6d35ea5"} Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.389132 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-58d6d94967-wdzcg" event={"ID":"f1bc6621-0892-452c-9f95-54554f8c6e68","Type":"ContainerStarted","Data":"f28d6a5450433f62a199b081a96fe4301a0493157d5b32a045c0f3fd0f981f35"} Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.390107 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.390144 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.414558 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.426994 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef","Type":"ContainerStarted","Data":"9b160d8d81e046e2cdee4c9713209e91ef7045d98f9716a3994e613efb141f42"} Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.434547 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.453871 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-58d6d94967-wdzcg" podStartSLOduration=9.453840316 podStartE2EDuration="9.453840316s" podCreationTimestamp="2026-01-29 11:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:24.441537122 +0000 UTC m=+1170.314571313" watchObservedRunningTime="2026-01-29 11:18:24.453840316 +0000 UTC m=+1170.326874507" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.478725 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: E0129 11:18:24.479407 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-httpd" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.479434 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-httpd" Jan 29 11:18:24 crc kubenswrapper[4593]: E0129 11:18:24.479486 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-log" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.479497 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-log" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.479700 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-log" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.479722 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7289daaa-acda-4854-a506-c6cc429562d3" containerName="glance-httpd" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.481217 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.485698 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.486067 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530160 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-config-data\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530204 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530484 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-logs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530564 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530596 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530656 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530759 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-scripts\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.530809 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h42x\" (UniqueName: \"kubernetes.io/projected/43872652-3bb2-4a5c-9b13-cb25d625cd01-kube-api-access-7h42x\") pod \"glance-default-external-api-0\" 
(UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.569763 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.593166 4593 scope.go:117] "RemoveContainer" containerID="90fb85235bc3606a7b4bb84b4b179cef3fafc0ce2eb0f3b29c3cc2eb08fb78b3" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635110 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-logs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635192 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635226 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635256 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635326 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-scripts\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635363 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h42x\" (UniqueName: \"kubernetes.io/projected/43872652-3bb2-4a5c-9b13-cb25d625cd01-kube-api-access-7h42x\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635466 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-config-data\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635495 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 
11:18:24.635496 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.635725 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.638511 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43872652-3bb2-4a5c-9b13-cb25d625cd01-logs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.646007 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-scripts\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.668469 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h42x\" (UniqueName: \"kubernetes.io/projected/43872652-3bb2-4a5c-9b13-cb25d625cd01-kube-api-access-7h42x\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.678670 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.740609 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.742072 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43872652-3bb2-4a5c-9b13-cb25d625cd01-config-data\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.769260 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"43872652-3bb2-4a5c-9b13-cb25d625cd01\") " pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.866501 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.909479 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:18:24 crc kubenswrapper[4593]: I0129 11:18:24.910453 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:18:25 crc kubenswrapper[4593]: I0129 11:18:25.049700 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:18:25 crc kubenswrapper[4593]: I0129 11:18:25.050712 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:18:25 crc kubenswrapper[4593]: I0129 11:18:25.125239 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7289daaa-acda-4854-a506-c6cc429562d3" path="/var/lib/kubelet/pods/7289daaa-acda-4854-a506-c6cc429562d3/volumes" Jan 29 11:18:25 crc kubenswrapper[4593]: I0129 11:18:25.920936 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 29 11:18:25 crc kubenswrapper[4593]: W0129 11:18:25.931971 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43872652_3bb2_4a5c_9b13_cb25d625cd01.slice/crio-f8bc87ac1147d47e54367d1feb5ba989a8c026f389393b513858ddbd441d28a8 WatchSource:0}: Error finding container f8bc87ac1147d47e54367d1feb5ba989a8c026f389393b513858ddbd441d28a8: Status 404 returned error can't find the container with id f8bc87ac1147d47e54367d1feb5ba989a8c026f389393b513858ddbd441d28a8 Jan 29 11:18:26 crc kubenswrapper[4593]: I0129 11:18:26.524146 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerStarted","Data":"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba"} Jan 29 11:18:26 crc kubenswrapper[4593]: I0129 11:18:26.525338 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef","Type":"ContainerStarted","Data":"0f4ba927110e42f4575d57fa22b020fb5f291b538c1cb3b4b67bbdeb4239900e"} Jan 29 11:18:26 crc kubenswrapper[4593]: I0129 11:18:26.526371 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4f0192e-509d-46a4-9a2a-c82106019381","Type":"ContainerStarted","Data":"99ffedd87bdad963c0fac83d916ef7a3dfa821991254407dd583eb4da850308a"} Jan 29 11:18:26 crc kubenswrapper[4593]: I0129 11:18:26.528087 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43872652-3bb2-4a5c-9b13-cb25d625cd01","Type":"ContainerStarted","Data":"f8bc87ac1147d47e54367d1feb5ba989a8c026f389393b513858ddbd441d28a8"} Jan 29 11:18:27 crc kubenswrapper[4593]: I0129 11:18:27.547028 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c4f0192e-509d-46a4-9a2a-c82106019381","Type":"ContainerStarted","Data":"4c05dcb5cd7f81485fe4d9e1347db0f5e68c055073e01d377db0e1d469245ae3"} Jan 29 11:18:27 crc kubenswrapper[4593]: I0129 11:18:27.551475 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerStarted","Data":"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf"} Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.037145 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58d6d94967-wdzcg" podUID="f1bc6621-0892-452c-9f95-54554f8c6e68" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.045722 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-58d6d94967-wdzcg" podUID="f1bc6621-0892-452c-9f95-54554f8c6e68" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.227129 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5dc77db4b8-s2bq6" Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.428088 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.428386 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1512a75d-a403-420b-a9be-f931faf1273a" containerName="kube-state-metrics" containerID="cri-o://86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4" gracePeriod=30 Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.585538 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43872652-3bb2-4a5c-9b13-cb25d625cd01","Type":"ContainerStarted","Data":"544524f295bee87031c7a71defb576c27cc4dcaa1ba684a41c30b9be9bac1142"} Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.599370 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerStarted","Data":"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996"} Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.606960 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7ea14af-5b7c-44d6-a34c-1a344bfc96ef","Type":"ContainerStarted","Data":"9231199f33065bde95f80e5a36be406be2d308a0f6901f81b0b5c94971e920e5"} Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.606998 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.751216 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.751192701 podStartE2EDuration="6.751192701s" podCreationTimestamp="2026-01-29 11:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:28.73492664 +0000 UTC m=+1174.607960831" watchObservedRunningTime="2026-01-29 11:18:28.751192701 +0000 UTC m=+1174.624226892" Jan 29 11:18:28 crc kubenswrapper[4593]: I0129 11:18:28.805840 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.8058103899999995 podStartE2EDuration="6.80581039s" podCreationTimestamp="2026-01-29 11:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:28.780713771 
+0000 UTC m=+1174.653747972" watchObservedRunningTime="2026-01-29 11:18:28.80581039 +0000 UTC m=+1174.678844581" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.305038 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.375358 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsks2\" (UniqueName: \"kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2\") pod \"1512a75d-a403-420b-a9be-f931faf1273a\" (UID: \"1512a75d-a403-420b-a9be-f931faf1273a\") " Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.383035 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2" (OuterVolumeSpecName: "kube-api-access-fsks2") pod "1512a75d-a403-420b-a9be-f931faf1273a" (UID: "1512a75d-a403-420b-a9be-f931faf1273a"). InnerVolumeSpecName "kube-api-access-fsks2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.477999 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsks2\" (UniqueName: \"kubernetes.io/projected/1512a75d-a403-420b-a9be-f931faf1273a-kube-api-access-fsks2\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.653588 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"43872652-3bb2-4a5c-9b13-cb25d625cd01","Type":"ContainerStarted","Data":"49e8574d8790b66f47ddc46c109214e8113927c00b90278fd8fb5f822d2ca25c"} Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.670334 4593 generic.go:334] "Generic (PLEG): container finished" podID="1512a75d-a403-420b-a9be-f931faf1273a" containerID="86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4" exitCode=2 Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.670442 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1512a75d-a403-420b-a9be-f931faf1273a","Type":"ContainerDied","Data":"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4"} Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.670515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1512a75d-a403-420b-a9be-f931faf1273a","Type":"ContainerDied","Data":"a9c985edeb4a844ebb330990ed11e56a44761422347a56b0c3bd545f3f8f0fc2"} Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.670539 4593 scope.go:117] "RemoveContainer" containerID="86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.670837 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.693292 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.693264353 podStartE2EDuration="5.693264353s" podCreationTimestamp="2026-01-29 11:18:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:18:29.681132245 +0000 UTC m=+1175.554166436" watchObservedRunningTime="2026-01-29 11:18:29.693264353 +0000 UTC m=+1175.566298544" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.735106 4593 scope.go:117] "RemoveContainer" containerID="86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4" Jan 29 11:18:29 crc kubenswrapper[4593]: E0129 11:18:29.738409 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4\": container with ID starting with 86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4 not found: ID does not exist" containerID="86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.738761 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4"} err="failed to get container status \"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4\": rpc error: code = NotFound desc = could not find container \"86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4\": container with ID starting with 86bc440cb31e485f009e115ffa955e35cb29cedb22292b6665d6526a008cafe4 not found: ID does not exist" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.745000 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.760796 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.770709 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:29 crc kubenswrapper[4593]: E0129 11:18:29.776796 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1512a75d-a403-420b-a9be-f931faf1273a" containerName="kube-state-metrics" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.776832 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1512a75d-a403-420b-a9be-f931faf1273a" containerName="kube-state-metrics" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.777057 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1512a75d-a403-420b-a9be-f931faf1273a" containerName="kube-state-metrics" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.777697 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.783617 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.793435 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.794165 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.886788 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.886844 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.886987 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sbk9\" (UniqueName: \"kubernetes.io/projected/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-api-access-4sbk9\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.887261 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.989261 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.989330 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.989382 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sbk9\" (UniqueName: \"kubernetes.io/projected/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-api-access-4sbk9\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:29 crc kubenswrapper[4593]: I0129 11:18:29.989465 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" 
(UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:29.996337 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:30.007424 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sbk9\" (UniqueName: \"kubernetes.io/projected/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-api-access-4sbk9\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:30.013237 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:30.015740 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d0c0ba2-e8ed-4361-8aff-e71714a1617c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6d0c0ba2-e8ed-4361-8aff-e71714a1617c\") " pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:30.121253 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 29 11:18:30 crc kubenswrapper[4593]: I0129 11:18:30.748237 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 29 11:18:31 crc kubenswrapper[4593]: I0129 11:18:31.034179 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:31 crc kubenswrapper[4593]: I0129 11:18:31.040512 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-58d6d94967-wdzcg" Jan 29 11:18:31 crc kubenswrapper[4593]: I0129 11:18:31.087361 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1512a75d-a403-420b-a9be-f931faf1273a" path="/var/lib/kubelet/pods/1512a75d-a403-420b-a9be-f931faf1273a/volumes" Jan 29 11:18:31 crc kubenswrapper[4593]: I0129 11:18:31.142400 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:18:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:18:31 crc kubenswrapper[4593]: > Jan 29 11:18:31 crc kubenswrapper[4593]: I0129 11:18:31.690018 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6d0c0ba2-e8ed-4361-8aff-e71714a1617c","Type":"ContainerStarted","Data":"005ccb1e86c96c8065cec7df499a3e3c287f9afa66306410ebb021bd06437715"} Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.399527 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84867bd7b9-4vrb9" Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.486128 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"] Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.486580 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5dc77db4b8-s2bq6" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-api" containerID="cri-o://09e3428cd83e854d7603f9f23c1fc803bfbc3479156a4044437b5fa34689606a" gracePeriod=30 Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.487096 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5dc77db4b8-s2bq6" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-httpd" containerID="cri-o://ea1f5b0da7cda5576a556da562bab910500bb22fc10f44670339b87aed033fff" gracePeriod=30 Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.732987 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerStarted","Data":"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51"} Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.733672 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.742587 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6d0c0ba2-e8ed-4361-8aff-e71714a1617c","Type":"ContainerStarted","Data":"1680d182c5e7643ac7fecdecbd039a081e331c0fc039793d441768833bdfb2ad"} Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.743742 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 29 
11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.772161 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.327027013 podStartE2EDuration="10.772130482s" podCreationTimestamp="2026-01-29 11:18:22 +0000 UTC" firstStartedPulling="2026-01-29 11:18:23.716343963 +0000 UTC m=+1169.589378154" lastFinishedPulling="2026-01-29 11:18:30.161447432 +0000 UTC m=+1176.034481623" observedRunningTime="2026-01-29 11:18:32.768980857 +0000 UTC m=+1178.642015048" watchObservedRunningTime="2026-01-29 11:18:32.772130482 +0000 UTC m=+1178.645164673"
Jan 29 11:18:32 crc kubenswrapper[4593]: I0129 11:18:32.802148 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.301548546 podStartE2EDuration="3.802129554s" podCreationTimestamp="2026-01-29 11:18:29 +0000 UTC" firstStartedPulling="2026-01-29 11:18:30.757898864 +0000 UTC m=+1176.630933055" lastFinishedPulling="2026-01-29 11:18:32.258479872 +0000 UTC m=+1178.131514063" observedRunningTime="2026-01-29 11:18:32.796434089 +0000 UTC m=+1178.669468280" watchObservedRunningTime="2026-01-29 11:18:32.802129554 +0000 UTC m=+1178.675163745"
Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.034664 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.034937 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.087489 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.098241 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.754520 4593 generic.go:334] "Generic (PLEG): container finished" podID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerID="ea1f5b0da7cda5576a556da562bab910500bb22fc10f44670339b87aed033fff" exitCode=0
Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.754601 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerDied","Data":"ea1f5b0da7cda5576a556da562bab910500bb22fc10f44670339b87aed033fff"}
Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.755737 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:33 crc kubenswrapper[4593]: I0129 11:18:33.755760 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.079487 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.671904 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.765763 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-central-agent" containerID="cri-o://8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba" gracePeriod=30
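gracePeriod=30 in these "Killing container with a grace period" entries is the pod's terminationGracePeriodSeconds (30s is also the Kubernetes default): the runtime gets that long to stop the container before the kubelet escalates to a forced kill. A minimal sketch, assuming k8s.io/api/core/v1 types, of where the value lives in a pod spec; the actual ceilometer-0 manifest is not part of this log:

    // A minimal sketch, assuming k8s.io/api/core/v1 types: where the logged
    // gracePeriod=30 comes from. Illustration only, not the real ceilometer spec.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func specWithGracePeriod() corev1.PodSpec {
    	grace := int64(30) // surfaces as gracePeriod=30 in "Killing container" lines
    	return corev1.PodSpec{
    		TerminationGracePeriodSeconds: &grace,
    		Containers: []corev1.Container{
    			{Name: "ceilometer-central-agent", Image: "example.invalid/ceilometer"}, // placeholder image
    		},
    	}
    }

    func main() {
    	fmt.Println("grace period:", *specWithGracePeriod().TerminationGracePeriodSeconds, "s")
    }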
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.766304 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="sg-core" containerID="cri-o://833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996" gracePeriod=30
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.766412 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="proxy-httpd" containerID="cri-o://a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51" gracePeriod=30
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.766442 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-notification-agent" containerID="cri-o://58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf" gracePeriod=30
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.868035 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.868084 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.911577 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused"
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.954526 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 11:18:34 crc kubenswrapper[4593]: I0129 11:18:34.956972 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.051613 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused"
Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.783859 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"220bdfcb-98c4-4c78-8d95-ea64edfaf1ab","Type":"ContainerStarted","Data":"7186c53b99e322b4e59d65a8c7470388e891fc309cdd4c8518722936e8a9f732"}
Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788031 4593 generic.go:334] "Generic (PLEG): container finished" podID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerID="a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51" exitCode=0
Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788070 4593 generic.go:334] "Generic (PLEG): container finished" podID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerID="833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996" exitCode=2
containerID="58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf" exitCode=0 Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788622 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerDied","Data":"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51"} Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788670 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788684 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerDied","Data":"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996"} Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788694 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerDied","Data":"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf"} Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.788793 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 29 11:18:35 crc kubenswrapper[4593]: I0129 11:18:35.803319 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.906812275 podStartE2EDuration="40.803293458s" podCreationTimestamp="2026-01-29 11:17:55 +0000 UTC" firstStartedPulling="2026-01-29 11:17:56.667922955 +0000 UTC m=+1142.540957146" lastFinishedPulling="2026-01-29 11:18:34.564404138 +0000 UTC m=+1180.437438329" observedRunningTime="2026-01-29 11:18:35.797019908 +0000 UTC m=+1181.670054099" watchObservedRunningTime="2026-01-29 11:18:35.803293458 +0000 UTC m=+1181.676327649" Jan 29 11:18:37 crc kubenswrapper[4593]: I0129 11:18:37.939858 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c7ea14af-5b7c-44d6-a34c-1a344bfc96ef" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.174:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:39 crc kubenswrapper[4593]: I0129 11:18:39.938916 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="c7ea14af-5b7c-44d6-a34c-1a344bfc96ef" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.174:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:18:40 crc kubenswrapper[4593]: I0129 11:18:40.134099 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.121846 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:18:41 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:18:41 crc kubenswrapper[4593]: > Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.403673 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444134 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444248 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444301 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444430 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444480 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444549 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsjrw\" (UniqueName: \"kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.444606 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd\") pod \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\" (UID: \"78ec86eb-f94b-4f7f-83f0-30c10fd87869\") " Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.445256 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.445402 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.455615 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw" (OuterVolumeSpecName: "kube-api-access-xsjrw") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "kube-api-access-xsjrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.460768 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts" (OuterVolumeSpecName: "scripts") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.556701 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.592045 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.592092 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78ec86eb-f94b-4f7f-83f0-30c10fd87869-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.592117 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.592131 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsjrw\" (UniqueName: \"kubernetes.io/projected/78ec86eb-f94b-4f7f-83f0-30c10fd87869-kube-api-access-xsjrw\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.690779 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.694892 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.694976 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.721205 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data" (OuterVolumeSpecName: "config-data") pod "78ec86eb-f94b-4f7f-83f0-30c10fd87869" (UID: "78ec86eb-f94b-4f7f-83f0-30c10fd87869"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.796409 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ec86eb-f94b-4f7f-83f0-30c10fd87869-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.852323 4593 generic.go:334] "Generic (PLEG): container finished" podID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerID="8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba" exitCode=0 Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.852370 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerDied","Data":"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba"} Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.852397 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78ec86eb-f94b-4f7f-83f0-30c10fd87869","Type":"ContainerDied","Data":"711c86a21fd2d816293a1020209e105bca7ea576e3a8136db02ca95eb6d35ea5"} Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.852418 4593 scope.go:117] "RemoveContainer" containerID="a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.852589 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.882372 4593 scope.go:117] "RemoveContainer" containerID="833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.892078 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.902340 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.918327 4593 scope.go:117] "RemoveContainer" containerID="58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.937397 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:18:41 crc kubenswrapper[4593]: E0129 11:18:41.937864 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-notification-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.937889 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-notification-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: E0129 11:18:41.937909 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="sg-core" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.937918 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="sg-core" Jan 29 11:18:41 crc kubenswrapper[4593]: E0129 11:18:41.937930 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="proxy-httpd" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.937938 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="proxy-httpd" Jan 29 11:18:41 crc kubenswrapper[4593]: E0129 11:18:41.937953 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-central-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.937961 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-central-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.938219 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="proxy-httpd" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.938256 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-central-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.938272 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="ceilometer-notification-agent" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.938288 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" containerName="sg-core" Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.940606 4593 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.949529 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.949897 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.954850 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.958582 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:18:41 crc kubenswrapper[4593]: I0129 11:18:41.968875 4593 scope.go:117] "RemoveContainer" containerID="8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000059 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000104 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000184 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000206 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000243 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nklq7\" (UniqueName: \"kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000280 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000316 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.000375 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.100904 4593 scope.go:117] "RemoveContainer" containerID="a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.102226 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.102839 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.102937 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.102960 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.103015 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nklq7\" (UniqueName: \"kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.103067 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.103122 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.103267 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.103700 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: E0129 11:18:42.105849 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51\": container with ID starting with a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51 not found: ID does not exist" containerID="a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.105913 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51"} err="failed to get container status \"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51\": rpc error: code = NotFound desc = could not find container \"a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51\": container with ID starting with a0debea9525856c91778e0843228bff0de041b9da05ea78e3bab22062439fe51 not found: ID does not exist"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.105943 4593 scope.go:117] "RemoveContainer" containerID="833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.107152 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: E0129 11:18:42.108692 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996\": container with ID starting with 833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996 not found: ID does not exist" containerID="833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.108738 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996"} err="failed to get container status \"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996\": rpc error: code = NotFound desc = could not find container \"833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996\": container with ID starting with 833d07115f2510a2b5c1750bdceda27a12d681e5f9d78bc2a2a1e6ac0a401996 not found: ID does not exist"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.108769 4593 scope.go:117] "RemoveContainer" containerID="58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf"
Jan 29 11:18:42 crc kubenswrapper[4593]: E0129 11:18:42.109299 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf\": container with ID starting with 58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf not found: ID does not exist" containerID="58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.109344 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf"} err="failed to get container status \"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf\": rpc error: code = NotFound desc = could not find container \"58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf\": container with ID starting with 58063701298bf589828142a33efbc5e270766b9014738cc0aed3ba734c80bdaf not found: ID does not exist"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.109383 4593 scope.go:117] "RemoveContainer" containerID="8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba"
Jan 29 11:18:42 crc kubenswrapper[4593]: E0129 11:18:42.109670 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba\": container with ID starting with 8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba not found: ID does not exist" containerID="8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.109699 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba"} err="failed to get container status \"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba\": rpc error: code = NotFound desc = could not find container \"8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba\": container with ID starting with 8adbf800017e7b31f1fd44ae480d6741ac17edd3f5775a9606efde18534450ba not found: ID does not exist"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.109896 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.117414 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.118725 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.120249 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.120772 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.146692 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nklq7\" (UniqueName: \"kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7\") pod \"ceilometer-0\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.338666 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.856745 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.867403 4593 generic.go:334] "Generic (PLEG): container finished" podID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerID="09e3428cd83e854d7603f9f23c1fc803bfbc3479156a4044437b5fa34689606a" exitCode=0
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.867476 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerDied","Data":"09e3428cd83e854d7603f9f23c1fc803bfbc3479156a4044437b5fa34689606a"}
Jan 29 11:18:42 crc kubenswrapper[4593]: I0129 11:18:42.987071 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.102788 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ec86eb-f94b-4f7f-83f0-30c10fd87869" path="/var/lib/kubelet/pods/78ec86eb-f94b-4f7f-83f0-30c10fd87869/volumes"
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.439561 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dc77db4b8-s2bq6"
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.536288 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config\") pod \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") "
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.536357 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle\") pod \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") "
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.536437 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkjl6\" (UniqueName: \"kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6\") pod \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") "
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.536495 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs\") pod \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") "
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.536654 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config\") pod \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\" (UID: \"df8e6616-b9af-427f-9daa-d62ee3cb24d3\") "
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.565828 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "df8e6616-b9af-427f-9daa-d62ee3cb24d3" (UID: "df8e6616-b9af-427f-9daa-d62ee3cb24d3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.568843 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6" (OuterVolumeSpecName: "kube-api-access-mkjl6") pod "df8e6616-b9af-427f-9daa-d62ee3cb24d3" (UID: "df8e6616-b9af-427f-9daa-d62ee3cb24d3"). InnerVolumeSpecName "kube-api-access-mkjl6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.613079 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config" (OuterVolumeSpecName: "config") pod "df8e6616-b9af-427f-9daa-d62ee3cb24d3" (UID: "df8e6616-b9af-427f-9daa-d62ee3cb24d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.644004 4593 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.644338 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkjl6\" (UniqueName: \"kubernetes.io/projected/df8e6616-b9af-427f-9daa-d62ee3cb24d3-kube-api-access-mkjl6\") on node \"crc\" DevicePath \"\""
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.644441 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-config\") on node \"crc\" DevicePath \"\""
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.684375 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df8e6616-b9af-427f-9daa-d62ee3cb24d3" (UID: "df8e6616-b9af-427f-9daa-d62ee3cb24d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.708514 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "df8e6616-b9af-427f-9daa-d62ee3cb24d3" (UID: "df8e6616-b9af-427f-9daa-d62ee3cb24d3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.730884 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.731021 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.734167 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.736693 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.736799 4593 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.740847 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.746298 4593 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.746328 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8e6616-b9af-427f-9daa-d62ee3cb24d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.894434 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerStarted","Data":"b40c06d60848c18dde2f01bdab763148fbbd484c84e7f102df5e8efc825c8e5d"}
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.894503 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerStarted","Data":"d8c09b2b8b448508c118e29717b68e3a7cf488c8e6b3318a0fc967d165dd0e86"}
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.912119 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dc77db4b8-s2bq6" event={"ID":"df8e6616-b9af-427f-9daa-d62ee3cb24d3","Type":"ContainerDied","Data":"88035d7e970cd02ad4e71f38ef640ad02fc3f7e36a8669ad9dc26d692493f526"}
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.912171 4593 scope.go:117] "RemoveContainer" containerID="ea1f5b0da7cda5576a556da562bab910500bb22fc10f44670339b87aed033fff"
Jan 29 11:18:43 crc kubenswrapper[4593]: I0129 11:18:43.912293 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dc77db4b8-s2bq6"
Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.064287 4593 scope.go:117] "RemoveContainer" containerID="09e3428cd83e854d7603f9f23c1fc803bfbc3479156a4044437b5fa34689606a"
Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.125021 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"]
Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.136850 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5dc77db4b8-s2bq6"]
Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.911846 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused"
Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.923786 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerStarted","Data":"c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1"}
Jan 29 11:18:44 crc kubenswrapper[4593]: I0129 11:18:44.944897 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="c7ea14af-5b7c-44d6-a34c-1a344bfc96ef" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.174:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 29 11:18:45 crc kubenswrapper[4593]: I0129 11:18:45.050959 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused"
Jan 29 11:18:45 crc kubenswrapper[4593]: I0129 11:18:45.085762 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" path="/var/lib/kubelet/pods/df8e6616-b9af-427f-9daa-d62ee3cb24d3/volumes"
Jan 29 11:18:45 crc kubenswrapper[4593]: I0129 11:18:45.937720 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerStarted","Data":"87db22d6791489959e08e606893fce26ecb348d061df7a0b1bececa26e54b97e"}
Jan 29 11:18:49 crc kubenswrapper[4593]: I0129 11:18:49.980405 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerStarted","Data":"9946bfb35dcb9ca60e203e5220d24dee1ca137e4fc677bef2b4ce91126586731"}
Jan 29 11:18:49 crc kubenswrapper[4593]: I0129 11:18:49.981022 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 29 11:18:51 crc kubenswrapper[4593]: I0129 11:18:51.112514 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:18:51 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:18:51 crc kubenswrapper[4593]: >
Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.430955 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.244013573 podStartE2EDuration="12.43093335s" podCreationTimestamp="2026-01-29 11:18:41 +0000 UTC" firstStartedPulling="2026-01-29 11:18:43.000836153 +0000 UTC m=+1188.873870344" lastFinishedPulling="2026-01-29 11:18:49.18775594 +0000 UTC m=+1195.060790121" observedRunningTime="2026-01-29 11:18:50.016541125 +0000 UTC m=+1195.889575316" watchObservedRunningTime="2026-01-29 11:18:53.43093335 +0000 UTC m=+1199.303967541"
Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.437496 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.437877 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-central-agent" containerID="cri-o://b40c06d60848c18dde2f01bdab763148fbbd484c84e7f102df5e8efc825c8e5d" gracePeriod=30
Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.437930 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-notification-agent" containerID="cri-o://c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1" gracePeriod=30
Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.438232 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="proxy-httpd" containerID="cri-o://9946bfb35dcb9ca60e203e5220d24dee1ca137e4fc677bef2b4ce91126586731" gracePeriod=30
Jan 29 11:18:53 crc kubenswrapper[4593]: I0129 11:18:53.437924 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="sg-core" containerID="cri-o://87db22d6791489959e08e606893fce26ecb348d061df7a0b1bececa26e54b97e" gracePeriod=30
Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033317 4593 generic.go:334] "Generic (PLEG): container finished" podID="37dd6241-1218-4994-9fa1-75062ec38165" containerID="9946bfb35dcb9ca60e203e5220d24dee1ca137e4fc677bef2b4ce91126586731" exitCode=0
Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033566 4593 generic.go:334] "Generic (PLEG): container finished" podID="37dd6241-1218-4994-9fa1-75062ec38165" containerID="87db22d6791489959e08e606893fce26ecb348d061df7a0b1bececa26e54b97e" exitCode=2
Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033606 4593 generic.go:334] "Generic (PLEG): container finished" podID="37dd6241-1218-4994-9fa1-75062ec38165" containerID="c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1" exitCode=0
Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033374 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"9946bfb35dcb9ca60e203e5220d24dee1ca137e4fc677bef2b4ce91126586731"}
Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033658 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"87db22d6791489959e08e606893fce26ecb348d061df7a0b1bececa26e54b97e"}
Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.033669 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1"}
pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1"} Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.910516 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.910644 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.911553 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af"} pod="openstack/horizon-fbf566cdb-kbm9z" containerMessage="Container horizon failed startup probe, will be restarted" Jan 29 11:18:54 crc kubenswrapper[4593]: I0129 11:18:54.911604 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" containerID="cri-o://d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af" gracePeriod=30 Jan 29 11:18:55 crc kubenswrapper[4593]: I0129 11:18:55.050012 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:18:55 crc kubenswrapper[4593]: I0129 11:18:55.050119 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:18:55 crc kubenswrapper[4593]: I0129 11:18:55.050981 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"b268f526e5a04b5381dd6c521b7785de6e18d74e1d8c1ba48d2b1ab6cb3e4972"} pod="openstack/horizon-5bdffb4784-5zp8q" containerMessage="Container horizon failed startup probe, will be restarted" Jan 29 11:18:55 crc kubenswrapper[4593]: I0129 11:18:55.051018 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" containerID="cri-o://b268f526e5a04b5381dd6c521b7785de6e18d74e1d8c1ba48d2b1ab6cb3e4972" gracePeriod=30 Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.942969 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-86jg9"] Jan 29 11:18:57 crc kubenswrapper[4593]: E0129 11:18:57.943983 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-httpd" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.944004 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-httpd" Jan 29 11:18:57 crc kubenswrapper[4593]: E0129 11:18:57.944020 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-api" Jan 29 11:18:57 crc 
kubenswrapper[4593]: I0129 11:18:57.944027 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-api" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.944228 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-api" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.944255 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8e6616-b9af-427f-9daa-d62ee3cb24d3" containerName="neutron-httpd" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.944969 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.973674 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55t5h\" (UniqueName: \"kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.973768 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:57 crc kubenswrapper[4593]: I0129 11:18:57.977938 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-86jg9"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.044333 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vfj8w"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.045711 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.072168 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vfj8w"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.077857 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tq6w\" (UniqueName: \"kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.077976 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55t5h\" (UniqueName: \"kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.078076 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.078102 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.079231 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.132922 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55t5h\" (UniqueName: \"kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h\") pod \"nova-api-db-create-86jg9\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.179316 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.179411 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tq6w\" (UniqueName: \"kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.180188 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.237422 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tq6w\" (UniqueName: \"kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w\") pod \"nova-cell0-db-create-vfj8w\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.243741 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vpcpg"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.245101 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.286853 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-02db-account-create-update-8h7xj"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.288603 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.290465 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-86jg9" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.290897 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.355950 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-02db-account-create-update-8h7xj"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.378502 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.408668 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.408797 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w458\" (UniqueName: \"kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.408847 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdqhl\" (UniqueName: \"kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.408907 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.428222 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vpcpg"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.510947 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w458\" (UniqueName: \"kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.511043 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdqhl\" (UniqueName: \"kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.511127 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.511162 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " 
pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.512245 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.518045 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.538807 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-bbb2-account-create-update-nq54g"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.540246 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.547133 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.561288 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w458\" (UniqueName: \"kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458\") pod \"nova-api-02db-account-create-update-8h7xj\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.582904 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdqhl\" (UniqueName: \"kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl\") pod \"nova-cell1-db-create-vpcpg\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.593539 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bbb2-account-create-update-nq54g"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.645821 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.658094 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.663275 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-207d-account-create-update-n289g"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.664754 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.669392 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.718413 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-207d-account-create-update-n289g"] Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.734840 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49g4\" (UniqueName: \"kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.734916 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.841048 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8pk\" (UniqueName: \"kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.841444 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p49g4\" (UniqueName: \"kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.841484 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.842330 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.843592 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.862087 
4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p49g4\" (UniqueName: \"kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4\") pod \"nova-cell0-bbb2-account-create-update-nq54g\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.941609 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.943351 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8pk\" (UniqueName: \"kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.943458 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.944006 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:58 crc kubenswrapper[4593]: I0129 11:18:58.973846 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8pk\" (UniqueName: \"kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk\") pod \"nova-cell1-207d-account-create-update-n289g\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.011437 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.766806 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="c1755998-9149-49be-b10f-c4fe029728bc" containerName="galera" probeResult="failure" output="command timed out" Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.766884 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c1755998-9149-49be-b10f-c4fe029728bc" containerName="galera" probeResult="failure" output="command timed out" Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.859770 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-02db-account-create-update-8h7xj"] Jan 29 11:18:59 crc kubenswrapper[4593]: W0129 11:18:59.876890 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cc0715e_34d0_4d5e_a8cc_5809adc6e264.slice/crio-ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6 WatchSource:0}: Error finding container ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6: Status 404 returned error can't find the container with id ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6 Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.975168 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vfj8w"] Jan 29 11:18:59 crc kubenswrapper[4593]: I0129 11:18:59.994202 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-bbb2-account-create-update-nq54g"] Jan 29 11:19:00 crc kubenswrapper[4593]: I0129 11:19:00.039708 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vpcpg"] Jan 29 11:19:00 crc kubenswrapper[4593]: I0129 11:19:00.059481 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-86jg9"] Jan 29 11:19:00 crc kubenswrapper[4593]: I0129 11:19:00.101147 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-207d-account-create-update-n289g"] Jan 29 11:19:00 crc kubenswrapper[4593]: I0129 11:19:00.128188 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02db-account-create-update-8h7xj" event={"ID":"3cc0715e-34d0-4d5e-a8cc-5809adc6e264","Type":"ContainerStarted","Data":"ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6"} Jan 29 11:19:00 crc kubenswrapper[4593]: W0129 11:19:00.173938 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b37d23e_84cc_4059_a109_18fec66cd168.slice/crio-d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd WatchSource:0}: Error finding container d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd: Status 404 returned error can't find the container with id d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd Jan 29 11:19:00 crc kubenswrapper[4593]: W0129 11:19:00.177371 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5349ab78_1643_47e8_bfca_20d31e2f459f.slice/crio-bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc WatchSource:0}: Error finding container bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc: Status 404 returned error can't find the container with id 
bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc Jan 29 11:19:00 crc kubenswrapper[4593]: W0129 11:19:00.178582 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafd801e2_136a_408b_a7e6_ab9a8dcfdd3b.slice/crio-dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca WatchSource:0}: Error finding container dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca: Status 404 returned error can't find the container with id dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.120891 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:19:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:19:01 crc kubenswrapper[4593]: > Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.138829 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-86jg9" event={"ID":"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b","Type":"ContainerStarted","Data":"b4acf56e0984e495aea7b87f5e09b414ac2d3ef8fb7a27a8f9cffdcbe98b5b8c"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.138870 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-86jg9" event={"ID":"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b","Type":"ContainerStarted","Data":"dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.140550 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02db-account-create-update-8h7xj" event={"ID":"3cc0715e-34d0-4d5e-a8cc-5809adc6e264","Type":"ContainerStarted","Data":"6c0216f7cb045c8475f6c48e3f50c549e3404a77f63e6ee461ea5240850a1620"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.158395 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfj8w" event={"ID":"6b37d23e-84cc-4059-a109-18fec66cd168","Type":"ContainerStarted","Data":"97bad51c47183a029a20953701c3f31d5be0e445cb1a365cf05eca76d77d4eb6"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.158466 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfj8w" event={"ID":"6b37d23e-84cc-4059-a109-18fec66cd168","Type":"ContainerStarted","Data":"d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.173525 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-86jg9" podStartSLOduration=4.173502924 podStartE2EDuration="4.173502924s" podCreationTimestamp="2026-01-29 11:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.172171838 +0000 UTC m=+1207.045206029" watchObservedRunningTime="2026-01-29 11:19:01.173502924 +0000 UTC m=+1207.046537115" Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.184876 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-207d-account-create-update-n289g" event={"ID":"d60bb61f-5204-4149-9922-70c6b0916c48","Type":"ContainerStarted","Data":"4617f4b77856e9af93c03f010b2af2c31551118ca1d06a956c46e256c4dacc4c"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 
11:19:01.185121 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-207d-account-create-update-n289g" event={"ID":"d60bb61f-5204-4149-9922-70c6b0916c48","Type":"ContainerStarted","Data":"b589e21f0266150b72b75e48575c70865e45ffe8e3a984bb6e0a7d1e0ce27721"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.190970 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" event={"ID":"8c560b58-f036-4946-aca6-d59c9502954e","Type":"ContainerStarted","Data":"9d911603c45f632b1589627458c99f256ab970b9f33d34d26ebd6abdb5c39ade"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.192314 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" event={"ID":"8c560b58-f036-4946-aca6-d59c9502954e","Type":"ContainerStarted","Data":"44d5e9852fdbff2c2f57298b319bc2aac423abcdb37ecfe12370febe05fe491f"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.205667 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-vfj8w" podStartSLOduration=3.205650435 podStartE2EDuration="3.205650435s" podCreationTimestamp="2026-01-29 11:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.204147704 +0000 UTC m=+1207.077181895" watchObservedRunningTime="2026-01-29 11:19:01.205650435 +0000 UTC m=+1207.078684626" Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.207107 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpcpg" event={"ID":"5349ab78-1643-47e8-bfca-20d31e2f459f","Type":"ContainerStarted","Data":"690f9e7a9c00c85e345179d71bb55173000c29b38e2987305e760408ff69f398"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.207161 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpcpg" event={"ID":"5349ab78-1643-47e8-bfca-20d31e2f459f","Type":"ContainerStarted","Data":"bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc"} Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.228296 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-02db-account-create-update-8h7xj" podStartSLOduration=3.228279288 podStartE2EDuration="3.228279288s" podCreationTimestamp="2026-01-29 11:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.226562421 +0000 UTC m=+1207.099596602" watchObservedRunningTime="2026-01-29 11:19:01.228279288 +0000 UTC m=+1207.101313479" Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.249416 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-207d-account-create-update-n289g" podStartSLOduration=3.2493964 podStartE2EDuration="3.2493964s" podCreationTimestamp="2026-01-29 11:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.244177208 +0000 UTC m=+1207.117211389" watchObservedRunningTime="2026-01-29 11:19:01.2493964 +0000 UTC m=+1207.122430591" Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.272448 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" podStartSLOduration=3.272422923 
podStartE2EDuration="3.272422923s" podCreationTimestamp="2026-01-29 11:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.265608078 +0000 UTC m=+1207.138642279" watchObservedRunningTime="2026-01-29 11:19:01.272422923 +0000 UTC m=+1207.145457124" Jan 29 11:19:01 crc kubenswrapper[4593]: I0129 11:19:01.292154 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-vpcpg" podStartSLOduration=3.292134647 podStartE2EDuration="3.292134647s" podCreationTimestamp="2026-01-29 11:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:19:01.283679258 +0000 UTC m=+1207.156713449" watchObservedRunningTime="2026-01-29 11:19:01.292134647 +0000 UTC m=+1207.165168838" Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.229020 4593 generic.go:334] "Generic (PLEG): container finished" podID="6b37d23e-84cc-4059-a109-18fec66cd168" containerID="97bad51c47183a029a20953701c3f31d5be0e445cb1a365cf05eca76d77d4eb6" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.229358 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfj8w" event={"ID":"6b37d23e-84cc-4059-a109-18fec66cd168","Type":"ContainerDied","Data":"97bad51c47183a029a20953701c3f31d5be0e445cb1a365cf05eca76d77d4eb6"} Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.233791 4593 generic.go:334] "Generic (PLEG): container finished" podID="d60bb61f-5204-4149-9922-70c6b0916c48" containerID="4617f4b77856e9af93c03f010b2af2c31551118ca1d06a956c46e256c4dacc4c" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.233853 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-207d-account-create-update-n289g" event={"ID":"d60bb61f-5204-4149-9922-70c6b0916c48","Type":"ContainerDied","Data":"4617f4b77856e9af93c03f010b2af2c31551118ca1d06a956c46e256c4dacc4c"} Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.235807 4593 generic.go:334] "Generic (PLEG): container finished" podID="8c560b58-f036-4946-aca6-d59c9502954e" containerID="9d911603c45f632b1589627458c99f256ab970b9f33d34d26ebd6abdb5c39ade" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.235857 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" event={"ID":"8c560b58-f036-4946-aca6-d59c9502954e","Type":"ContainerDied","Data":"9d911603c45f632b1589627458c99f256ab970b9f33d34d26ebd6abdb5c39ade"} Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.247475 4593 generic.go:334] "Generic (PLEG): container finished" podID="5349ab78-1643-47e8-bfca-20d31e2f459f" containerID="690f9e7a9c00c85e345179d71bb55173000c29b38e2987305e760408ff69f398" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.247542 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpcpg" event={"ID":"5349ab78-1643-47e8-bfca-20d31e2f459f","Type":"ContainerDied","Data":"690f9e7a9c00c85e345179d71bb55173000c29b38e2987305e760408ff69f398"} Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.249071 4593 generic.go:334] "Generic (PLEG): container finished" podID="afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" containerID="b4acf56e0984e495aea7b87f5e09b414ac2d3ef8fb7a27a8f9cffdcbe98b5b8c" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.249107 4593 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-86jg9" event={"ID":"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b","Type":"ContainerDied","Data":"b4acf56e0984e495aea7b87f5e09b414ac2d3ef8fb7a27a8f9cffdcbe98b5b8c"} Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.256413 4593 generic.go:334] "Generic (PLEG): container finished" podID="3cc0715e-34d0-4d5e-a8cc-5809adc6e264" containerID="6c0216f7cb045c8475f6c48e3f50c549e3404a77f63e6ee461ea5240850a1620" exitCode=0 Jan 29 11:19:02 crc kubenswrapper[4593]: I0129 11:19:02.256466 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02db-account-create-update-8h7xj" event={"ID":"3cc0715e-34d0-4d5e-a8cc-5809adc6e264","Type":"ContainerDied","Data":"6c0216f7cb045c8475f6c48e3f50c549e3404a77f63e6ee461ea5240850a1620"} Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.269421 4593 generic.go:334] "Generic (PLEG): container finished" podID="37dd6241-1218-4994-9fa1-75062ec38165" containerID="b40c06d60848c18dde2f01bdab763148fbbd484c84e7f102df5e8efc825c8e5d" exitCode=0 Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.269520 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"b40c06d60848c18dde2f01bdab763148fbbd484c84e7f102df5e8efc825c8e5d"} Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.270011 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"37dd6241-1218-4994-9fa1-75062ec38165","Type":"ContainerDied","Data":"d8c09b2b8b448508c118e29717b68e3a7cf488c8e6b3318a0fc967d165dd0e86"} Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.270033 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c09b2b8b448508c118e29717b68e3a7cf488c8e6b3318a0fc967d165dd0e86" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.299003 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.353745 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354079 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354162 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nklq7\" (UniqueName: \"kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354262 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354429 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354574 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354784 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.354910 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd\") pod \"37dd6241-1218-4994-9fa1-75062ec38165\" (UID: \"37dd6241-1218-4994-9fa1-75062ec38165\") " Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.355861 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.368413 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.386925 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7" (OuterVolumeSpecName: "kube-api-access-nklq7") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "kube-api-access-nklq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.444831 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts" (OuterVolumeSpecName: "scripts") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.458245 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nklq7\" (UniqueName: \"kubernetes.io/projected/37dd6241-1218-4994-9fa1-75062ec38165-kube-api-access-nklq7\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.459512 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.459668 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/37dd6241-1218-4994-9fa1-75062ec38165-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.459816 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.521380 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.568229 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.597775 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.658838 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.672282 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.672316 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.682419 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data" (OuterVolumeSpecName: "config-data") pod "37dd6241-1218-4994-9fa1-75062ec38165" (UID: "37dd6241-1218-4994-9fa1-75062ec38165"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.777061 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37dd6241-1218-4994-9fa1-75062ec38165-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.947710 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:19:03 crc kubenswrapper[4593]: I0129 11:19:03.947791 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.024991 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.094195 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w458\" (UniqueName: \"kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458\") pod \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.094515 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts\") pod \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\" (UID: \"3cc0715e-34d0-4d5e-a8cc-5809adc6e264\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.096845 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3cc0715e-34d0-4d5e-a8cc-5809adc6e264" (UID: "3cc0715e-34d0-4d5e-a8cc-5809adc6e264"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.147998 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458" (OuterVolumeSpecName: "kube-api-access-5w458") pod "3cc0715e-34d0-4d5e-a8cc-5809adc6e264" (UID: "3cc0715e-34d0-4d5e-a8cc-5809adc6e264"). InnerVolumeSpecName "kube-api-access-5w458". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.202362 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.202403 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w458\" (UniqueName: \"kubernetes.io/projected/3cc0715e-34d0-4d5e-a8cc-5809adc6e264-kube-api-access-5w458\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.243071 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.264252 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-86jg9" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.283620 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.292939 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpcpg" event={"ID":"5349ab78-1643-47e8-bfca-20d31e2f459f","Type":"ContainerDied","Data":"bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc"} Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.292992 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6395432e815ad482ae23d26fcefd4354ba895cd6f4b3d24eeef8500addb7bc" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.293075 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vpcpg" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.294952 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02db-account-create-update-8h7xj" event={"ID":"3cc0715e-34d0-4d5e-a8cc-5809adc6e264","Type":"ContainerDied","Data":"ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6"} Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.294999 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccff10d99b931ba016c401ec01d3d8c3eb26cc68525d8b0b87a53722c22d6da6" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.295039 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-02db-account-create-update-8h7xj" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.303377 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts\") pod \"6b37d23e-84cc-4059-a109-18fec66cd168\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.303608 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tq6w\" (UniqueName: \"kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w\") pod \"6b37d23e-84cc-4059-a109-18fec66cd168\" (UID: \"6b37d23e-84cc-4059-a109-18fec66cd168\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.303918 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b37d23e-84cc-4059-a109-18fec66cd168" (UID: "6b37d23e-84cc-4059-a109-18fec66cd168"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.303978 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vfj8w" event={"ID":"6b37d23e-84cc-4059-a109-18fec66cd168","Type":"ContainerDied","Data":"d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd"} Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.304848 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99f9e630d45e320654c3f4cd99cdd7630a876ce3d253088b42c2dc2c0673ebd" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.304070 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vfj8w" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.305382 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b37d23e-84cc-4059-a109-18fec66cd168-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.317084 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w" (OuterVolumeSpecName: "kube-api-access-4tq6w") pod "6b37d23e-84cc-4059-a109-18fec66cd168" (UID: "6b37d23e-84cc-4059-a109-18fec66cd168"). InnerVolumeSpecName "kube-api-access-4tq6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.319888 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.323904 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-86jg9" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.324254 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-86jg9" event={"ID":"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b","Type":"ContainerDied","Data":"dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca"} Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.324339 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dabc124961373c0032688619d04b12e629b17e0138d3ed3295d1102ce1345dca" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.410496 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdqhl\" (UniqueName: \"kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl\") pod \"5349ab78-1643-47e8-bfca-20d31e2f459f\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.410721 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55t5h\" (UniqueName: \"kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h\") pod \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.410777 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts\") pod \"5349ab78-1643-47e8-bfca-20d31e2f459f\" (UID: \"5349ab78-1643-47e8-bfca-20d31e2f459f\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.410857 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts\") pod \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\" (UID: \"afd801e2-136a-408b-a7e6-ab9a8dcfdd3b\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.412317 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tq6w\" (UniqueName: \"kubernetes.io/projected/6b37d23e-84cc-4059-a109-18fec66cd168-kube-api-access-4tq6w\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.417766 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" (UID: "afd801e2-136a-408b-a7e6-ab9a8dcfdd3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.421352 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5349ab78-1643-47e8-bfca-20d31e2f459f" (UID: "5349ab78-1643-47e8-bfca-20d31e2f459f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.423349 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl" (OuterVolumeSpecName: "kube-api-access-cdqhl") pod "5349ab78-1643-47e8-bfca-20d31e2f459f" (UID: "5349ab78-1643-47e8-bfca-20d31e2f459f"). InnerVolumeSpecName "kube-api-access-cdqhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.440957 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h" (OuterVolumeSpecName: "kube-api-access-55t5h") pod "afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" (UID: "afd801e2-136a-408b-a7e6-ab9a8dcfdd3b"). InnerVolumeSpecName "kube-api-access-55t5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.503231 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.509260 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.514051 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdqhl\" (UniqueName: \"kubernetes.io/projected/5349ab78-1643-47e8-bfca-20d31e2f459f-kube-api-access-cdqhl\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.514301 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55t5h\" (UniqueName: \"kubernetes.io/projected/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-kube-api-access-55t5h\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.514380 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5349ab78-1643-47e8-bfca-20d31e2f459f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.514482 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.540773 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.540933 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541357 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="proxy-httpd" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541374 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="proxy-httpd" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541393 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5349ab78-1643-47e8-bfca-20d31e2f459f" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541400 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5349ab78-1643-47e8-bfca-20d31e2f459f" containerName="mariadb-database-create" Jan 29 11:19:04 
crc kubenswrapper[4593]: E0129 11:19:04.541413 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541420 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541439 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-notification-agent" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541446 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-notification-agent" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541461 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="sg-core" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541469 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="sg-core" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541481 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c560b58-f036-4946-aca6-d59c9502954e" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541487 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c560b58-f036-4946-aca6-d59c9502954e" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541501 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b37d23e-84cc-4059-a109-18fec66cd168" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541508 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b37d23e-84cc-4059-a109-18fec66cd168" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541535 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc0715e-34d0-4d5e-a8cc-5809adc6e264" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541542 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc0715e-34d0-4d5e-a8cc-5809adc6e264" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: E0129 11:19:04.541556 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-central-agent" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541563 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-central-agent" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541801 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-notification-agent" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541819 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b37d23e-84cc-4059-a109-18fec66cd168" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541835 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="ceilometer-central-agent" Jan 29 11:19:04 crc 
kubenswrapper[4593]: I0129 11:19:04.541848 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541858 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="sg-core" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541873 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="37dd6241-1218-4994-9fa1-75062ec38165" containerName="proxy-httpd" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541882 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="5349ab78-1643-47e8-bfca-20d31e2f459f" containerName="mariadb-database-create" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541896 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc0715e-34d0-4d5e-a8cc-5809adc6e264" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.541905 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c560b58-f036-4946-aca6-d59c9502954e" containerName="mariadb-account-create-update" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.548056 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.559743 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.561339 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.561616 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.561687 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.566609 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.621439 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts\") pod \"8c560b58-f036-4946-aca6-d59c9502954e\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.621589 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w8pk\" (UniqueName: \"kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk\") pod \"d60bb61f-5204-4149-9922-70c6b0916c48\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.621643 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p49g4\" (UniqueName: \"kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4\") pod \"8c560b58-f036-4946-aca6-d59c9502954e\" (UID: \"8c560b58-f036-4946-aca6-d59c9502954e\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.621730 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts\") pod \"d60bb61f-5204-4149-9922-70c6b0916c48\" (UID: \"d60bb61f-5204-4149-9922-70c6b0916c48\") " Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.622655 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623022 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c560b58-f036-4946-aca6-d59c9502954e" (UID: "8c560b58-f036-4946-aca6-d59c9502954e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623131 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623153 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623473 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623621 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623851 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8pfd\" (UniqueName: \"kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623903 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.623971 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.630363 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c560b58-f036-4946-aca6-d59c9502954e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.630646 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d60bb61f-5204-4149-9922-70c6b0916c48" (UID: "d60bb61f-5204-4149-9922-70c6b0916c48"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.647484 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk" (OuterVolumeSpecName: "kube-api-access-8w8pk") pod "d60bb61f-5204-4149-9922-70c6b0916c48" (UID: "d60bb61f-5204-4149-9922-70c6b0916c48"). InnerVolumeSpecName "kube-api-access-8w8pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.658016 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4" (OuterVolumeSpecName: "kube-api-access-p49g4") pod "8c560b58-f036-4946-aca6-d59c9502954e" (UID: "8c560b58-f036-4946-aca6-d59c9502954e"). InnerVolumeSpecName "kube-api-access-p49g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732115 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732197 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732250 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8pfd\" (UniqueName: \"kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732283 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732322 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732342 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732436 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732463 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732546 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w8pk\" (UniqueName: \"kubernetes.io/projected/d60bb61f-5204-4149-9922-70c6b0916c48-kube-api-access-8w8pk\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732560 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p49g4\" (UniqueName: \"kubernetes.io/projected/8c560b58-f036-4946-aca6-d59c9502954e-kube-api-access-p49g4\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732574 4593 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d60bb61f-5204-4149-9922-70c6b0916c48-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732593 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.732904 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.737343 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.739272 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.739309 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.740455 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.742492 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc 
kubenswrapper[4593]: I0129 11:19:04.751542 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8pfd\" (UniqueName: \"kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd\") pod \"ceilometer-0\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " pod="openstack/ceilometer-0" Jan 29 11:19:04 crc kubenswrapper[4593]: I0129 11:19:04.880080 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.097672 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37dd6241-1218-4994-9fa1-75062ec38165" path="/var/lib/kubelet/pods/37dd6241-1218-4994-9fa1-75062ec38165/volumes" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.213324 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.331670 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" event={"ID":"8c560b58-f036-4946-aca6-d59c9502954e","Type":"ContainerDied","Data":"44d5e9852fdbff2c2f57298b319bc2aac423abcdb37ecfe12370febe05fe491f"} Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.331728 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44d5e9852fdbff2c2f57298b319bc2aac423abcdb37ecfe12370febe05fe491f" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.331806 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-bbb2-account-create-update-nq54g" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.334597 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-207d-account-create-update-n289g" event={"ID":"d60bb61f-5204-4149-9922-70c6b0916c48","Type":"ContainerDied","Data":"b589e21f0266150b72b75e48575c70865e45ffe8e3a984bb6e0a7d1e0ce27721"} Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.334678 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b589e21f0266150b72b75e48575c70865e45ffe8e3a984bb6e0a7d1e0ce27721" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.334738 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-207d-account-create-update-n289g" Jan 29 11:19:05 crc kubenswrapper[4593]: I0129 11:19:05.336470 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerStarted","Data":"fd0e610cbd8e4e7a281669c1ec869227753d76061275b3b46254e309d0addeb7"} Jan 29 11:19:06 crc kubenswrapper[4593]: I0129 11:19:06.348444 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerStarted","Data":"baa1893081a9ffd09dbe982049f564bb11e4a6d94432ca7316021323ac31f6b3"} Jan 29 11:19:07 crc kubenswrapper[4593]: I0129 11:19:07.362093 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerStarted","Data":"1dcf72accedd5617ce4ca3dcfdfdaf51830248482923c5198609e5deb5c5b3a6"} Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.373667 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerStarted","Data":"468612e35ac127650687828d94e869098ca3d6a5052cb337e01393ae58067cd1"} Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.826874 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vkj44"] Jan 29 11:19:08 crc kubenswrapper[4593]: E0129 11:19:08.827498 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60bb61f-5204-4149-9922-70c6b0916c48" containerName="mariadb-account-create-update" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.827574 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60bb61f-5204-4149-9922-70c6b0916c48" containerName="mariadb-account-create-update" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.827844 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60bb61f-5204-4149-9922-70c6b0916c48" containerName="mariadb-account-create-update" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.828545 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.830431 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.831874 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.832372 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dv5z9" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.852871 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vkj44"] Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.915823 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.915912 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.915954 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:08 crc kubenswrapper[4593]: I0129 11:19:08.915995 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxmg7\" (UniqueName: \"kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.017462 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.017557 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.017589 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: 
\"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.017712 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxmg7\" (UniqueName: \"kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.026016 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.026200 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.026827 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.043766 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxmg7\" (UniqueName: \"kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7\") pod \"nova-cell0-conductor-db-sync-vkj44\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") " pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.146500 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:09 crc kubenswrapper[4593]: I0129 11:19:09.714029 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vkj44"] Jan 29 11:19:10 crc kubenswrapper[4593]: I0129 11:19:10.422949 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vkj44" event={"ID":"9a120fd3-e300-459e-9c9b-dd0f3da25621","Type":"ContainerStarted","Data":"5657eeacbcf8694db60da42cd98750e99517877fa702ba31f32e45b7a57b37a1"} Jan 29 11:19:11 crc kubenswrapper[4593]: I0129 11:19:11.109133 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:19:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:19:11 crc kubenswrapper[4593]: > Jan 29 11:19:11 crc kubenswrapper[4593]: I0129 11:19:11.442231 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerStarted","Data":"e8a7bd9f797876139eb4c0c8b43df0d5093bd51585dcc4c1e1a31c81b63ced28"} Jan 29 11:19:11 crc kubenswrapper[4593]: I0129 11:19:11.442502 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:19:11 crc kubenswrapper[4593]: I0129 11:19:11.475893 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.454223861 podStartE2EDuration="7.475864831s" podCreationTimestamp="2026-01-29 11:19:04 +0000 UTC" firstStartedPulling="2026-01-29 11:19:05.230938383 +0000 UTC m=+1211.103972574" lastFinishedPulling="2026-01-29 11:19:10.252579353 +0000 UTC m=+1216.125613544" observedRunningTime="2026-01-29 11:19:11.469509149 +0000 UTC m=+1217.342543340" watchObservedRunningTime="2026-01-29 11:19:11.475864831 +0000 UTC m=+1217.348899022" Jan 29 11:19:21 crc kubenswrapper[4593]: I0129 11:19:21.106536 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:19:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:19:21 crc kubenswrapper[4593]: > Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.839344 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.840747 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-central-agent" containerID="cri-o://baa1893081a9ffd09dbe982049f564bb11e4a6d94432ca7316021323ac31f6b3" gracePeriod=30 Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.840795 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="sg-core" containerID="cri-o://468612e35ac127650687828d94e869098ca3d6a5052cb337e01393ae58067cd1" gracePeriod=30 Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.840822 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" 
containerName="ceilometer-notification-agent" containerID="cri-o://1dcf72accedd5617ce4ca3dcfdfdaf51830248482923c5198609e5deb5c5b3a6" gracePeriod=30 Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.840805 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="proxy-httpd" containerID="cri-o://e8a7bd9f797876139eb4c0c8b43df0d5093bd51585dcc4c1e1a31c81b63ced28" gracePeriod=30 Jan 29 11:19:22 crc kubenswrapper[4593]: I0129 11:19:22.865375 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.184:3000/\": EOF" Jan 29 11:19:24 crc kubenswrapper[4593]: I0129 11:19:24.581008 4593 generic.go:334] "Generic (PLEG): container finished" podID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerID="e8a7bd9f797876139eb4c0c8b43df0d5093bd51585dcc4c1e1a31c81b63ced28" exitCode=0 Jan 29 11:19:24 crc kubenswrapper[4593]: I0129 11:19:24.581395 4593 generic.go:334] "Generic (PLEG): container finished" podID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerID="468612e35ac127650687828d94e869098ca3d6a5052cb337e01393ae58067cd1" exitCode=2 Jan 29 11:19:24 crc kubenswrapper[4593]: I0129 11:19:24.581243 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerDied","Data":"e8a7bd9f797876139eb4c0c8b43df0d5093bd51585dcc4c1e1a31c81b63ced28"} Jan 29 11:19:24 crc kubenswrapper[4593]: I0129 11:19:24.581444 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerDied","Data":"468612e35ac127650687828d94e869098ca3d6a5052cb337e01393ae58067cd1"} Jan 29 11:19:25 crc kubenswrapper[4593]: I0129 11:19:25.596795 4593 generic.go:334] "Generic (PLEG): container finished" podID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerID="d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af" exitCode=137 Jan 29 11:19:25 crc kubenswrapper[4593]: I0129 11:19:25.596927 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerDied","Data":"d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af"} Jan 29 11:19:25 crc kubenswrapper[4593]: I0129 11:19:25.597794 4593 scope.go:117] "RemoveContainer" containerID="a15a1a862b6057b76f95edeb2bb41d937e5e017b829f9f7c6c63b71068d74996" Jan 29 11:19:25 crc kubenswrapper[4593]: I0129 11:19:25.602881 4593 generic.go:334] "Generic (PLEG): container finished" podID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerID="b268f526e5a04b5381dd6c521b7785de6e18d74e1d8c1ba48d2b1ab6cb3e4972" exitCode=137 Jan 29 11:19:25 crc kubenswrapper[4593]: I0129 11:19:25.602956 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerDied","Data":"b268f526e5a04b5381dd6c521b7785de6e18d74e1d8c1ba48d2b1ab6cb3e4972"} Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.641787 4593 scope.go:117] "RemoveContainer" containerID="948ff5eda4c7a4e3a5023888e59c0f30a788f7ad09bc8aba86ab19e010a4eeb1" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.676065 4593 generic.go:334] "Generic (PLEG): container finished" podID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" 
containerID="1dcf72accedd5617ce4ca3dcfdfdaf51830248482923c5198609e5deb5c5b3a6" exitCode=0 Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.676103 4593 generic.go:334] "Generic (PLEG): container finished" podID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerID="baa1893081a9ffd09dbe982049f564bb11e4a6d94432ca7316021323ac31f6b3" exitCode=0 Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.676127 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerDied","Data":"1dcf72accedd5617ce4ca3dcfdfdaf51830248482923c5198609e5deb5c5b3a6"} Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.676158 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerDied","Data":"baa1893081a9ffd09dbe982049f564bb11e4a6d94432ca7316021323ac31f6b3"} Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.808450 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918346 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918523 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918548 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918594 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918691 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918724 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8pfd\" (UniqueName: \"kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918762 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: 
\"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.918785 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml\") pod \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\" (UID: \"65b9b146-d0fa-4da2-8d0a-a6896f57895b\") " Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.929557 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.929927 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.934869 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd" (OuterVolumeSpecName: "kube-api-access-t8pfd") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "kube-api-access-t8pfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.947006 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts" (OuterVolumeSpecName: "scripts") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:27 crc kubenswrapper[4593]: I0129 11:19:27.992703 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.021994 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.022028 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.022040 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.022055 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65b9b146-d0fa-4da2-8d0a-a6896f57895b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.022065 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8pfd\" (UniqueName: \"kubernetes.io/projected/65b9b146-d0fa-4da2-8d0a-a6896f57895b-kube-api-access-t8pfd\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.033789 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.034291 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.098696 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data" (OuterVolumeSpecName: "config-data") pod "65b9b146-d0fa-4da2-8d0a-a6896f57895b" (UID: "65b9b146-d0fa-4da2-8d0a-a6896f57895b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.123450 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.123484 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.123496 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b9b146-d0fa-4da2-8d0a-a6896f57895b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.704825 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65b9b146-d0fa-4da2-8d0a-a6896f57895b","Type":"ContainerDied","Data":"fd0e610cbd8e4e7a281669c1ec869227753d76061275b3b46254e309d0addeb7"} Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.705213 4593 scope.go:117] "RemoveContainer" containerID="e8a7bd9f797876139eb4c0c8b43df0d5093bd51585dcc4c1e1a31c81b63ced28" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.705378 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.719786 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vkj44" event={"ID":"9a120fd3-e300-459e-9c9b-dd0f3da25621","Type":"ContainerStarted","Data":"81d2ae81ac7fd09960ec8dcecfdd7fb40c2612e8262393b7c2c13c07e2588b6b"} Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.730385 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerStarted","Data":"3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909"} Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.748044 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bdffb4784-5zp8q" event={"ID":"be4a01cd-2eb7-48e8-8a7e-eb02f8851188","Type":"ContainerStarted","Data":"adc17d8c83f12504baffeb49cb0d2af04cf61eab5f1267756b9ff12b2edb5285"} Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.754272 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vkj44" podStartSLOduration=2.704186543 podStartE2EDuration="20.754252774s" podCreationTimestamp="2026-01-29 11:19:08 +0000 UTC" firstStartedPulling="2026-01-29 11:19:09.726827045 +0000 UTC m=+1215.599861236" lastFinishedPulling="2026-01-29 11:19:27.776893276 +0000 UTC m=+1233.649927467" observedRunningTime="2026-01-29 11:19:28.741370135 +0000 UTC m=+1234.614404336" watchObservedRunningTime="2026-01-29 11:19:28.754252774 +0000 UTC m=+1234.627286965" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.782237 4593 scope.go:117] "RemoveContainer" containerID="468612e35ac127650687828d94e869098ca3d6a5052cb337e01393ae58067cd1" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.824690 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.831784 4593 scope.go:117] "RemoveContainer" 
containerID="1dcf72accedd5617ce4ca3dcfdfdaf51830248482923c5198609e5deb5c5b3a6" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.850378 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:28 crc kubenswrapper[4593]: E0129 11:19:28.876946 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65b9b146_d0fa_4da2_8d0a_a6896f57895b.slice/crio-fd0e610cbd8e4e7a281669c1ec869227753d76061275b3b46254e309d0addeb7\": RecentStats: unable to find data in memory cache]" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.878445 4593 scope.go:117] "RemoveContainer" containerID="baa1893081a9ffd09dbe982049f564bb11e4a6d94432ca7316021323ac31f6b3" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.886409 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:28 crc kubenswrapper[4593]: E0129 11:19:28.886978 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-notification-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887004 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-notification-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: E0129 11:19:28.887027 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-central-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887035 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-central-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: E0129 11:19:28.887053 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="proxy-httpd" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887061 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="proxy-httpd" Jan 29 11:19:28 crc kubenswrapper[4593]: E0129 11:19:28.887103 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="sg-core" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887112 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="sg-core" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887619 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="sg-core" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887674 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-notification-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887689 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="ceilometer-central-agent" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.887700 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" containerName="proxy-httpd" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.890507 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.894884 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.895258 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.898016 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:19:28 crc kubenswrapper[4593]: I0129 11:19:28.926279 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040661 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040751 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040827 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040876 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040940 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.040987 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.041040 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.041094 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppkf8\" (UniqueName: 
\"kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.086555 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b9b146-d0fa-4da2-8d0a-a6896f57895b" path="/var/lib/kubelet/pods/65b9b146-d0fa-4da2-8d0a-a6896f57895b/volumes" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142442 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142761 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142825 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppkf8\" (UniqueName: \"kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142853 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142893 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142924 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.142971 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.143021 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.144339 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd\") pod \"ceilometer-0\" (UID: 
\"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.144449 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.152022 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.152096 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.166357 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.167977 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.173746 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.178522 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppkf8\" (UniqueName: \"kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8\") pod \"ceilometer-0\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") " pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.221990 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:29 crc kubenswrapper[4593]: W0129 11:19:29.727718 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf1f4c00_33e4_4464_8ce0_c188cd6c2098.slice/crio-b893608d9e63ce09c17a2cd3bafb65d7a0e42cb80d9169775a8751adace0b1ce WatchSource:0}: Error finding container b893608d9e63ce09c17a2cd3bafb65d7a0e42cb80d9169775a8751adace0b1ce: Status 404 returned error can't find the container with id b893608d9e63ce09c17a2cd3bafb65d7a0e42cb80d9169775a8751adace0b1ce Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.733828 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:29 crc kubenswrapper[4593]: I0129 11:19:29.759920 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerStarted","Data":"b893608d9e63ce09c17a2cd3bafb65d7a0e42cb80d9169775a8751adace0b1ce"} Jan 29 11:19:30 crc kubenswrapper[4593]: I0129 11:19:30.776095 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerStarted","Data":"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83"} Jan 29 11:19:31 crc kubenswrapper[4593]: I0129 11:19:31.102885 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:19:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:19:31 crc kubenswrapper[4593]: > Jan 29 11:19:31 crc kubenswrapper[4593]: I0129 11:19:31.789781 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerStarted","Data":"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98"} Jan 29 11:19:33 crc kubenswrapper[4593]: I0129 11:19:33.808309 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerStarted","Data":"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4"} Jan 29 11:19:33 crc kubenswrapper[4593]: I0129 11:19:33.946771 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:19:33 crc kubenswrapper[4593]: I0129 11:19:33.946825 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:19:34 crc kubenswrapper[4593]: I0129 11:19:34.909783 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:19:34 crc kubenswrapper[4593]: I0129 11:19:34.910120 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:19:35 crc kubenswrapper[4593]: I0129 11:19:35.049754 4593 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:19:35 crc kubenswrapper[4593]: I0129 11:19:35.049818 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bdffb4784-5zp8q" Jan 29 11:19:36 crc kubenswrapper[4593]: I0129 11:19:36.712395 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:36 crc kubenswrapper[4593]: I0129 11:19:36.850414 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerStarted","Data":"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b"} Jan 29 11:19:36 crc kubenswrapper[4593]: I0129 11:19:36.850675 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:19:36 crc kubenswrapper[4593]: I0129 11:19:36.881773 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.656291713 podStartE2EDuration="8.881750574s" podCreationTimestamp="2026-01-29 11:19:28 +0000 UTC" firstStartedPulling="2026-01-29 11:19:29.729539835 +0000 UTC m=+1235.602574026" lastFinishedPulling="2026-01-29 11:19:35.954998686 +0000 UTC m=+1241.828032887" observedRunningTime="2026-01-29 11:19:36.878935227 +0000 UTC m=+1242.751969418" watchObservedRunningTime="2026-01-29 11:19:36.881750574 +0000 UTC m=+1242.754784765" Jan 29 11:19:37 crc kubenswrapper[4593]: I0129 11:19:37.859227 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-central-agent" containerID="cri-o://1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83" gracePeriod=30 Jan 29 11:19:37 crc kubenswrapper[4593]: I0129 11:19:37.859414 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="sg-core" containerID="cri-o://884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4" gracePeriod=30 Jan 29 11:19:37 crc kubenswrapper[4593]: I0129 11:19:37.859441 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-notification-agent" containerID="cri-o://50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98" gracePeriod=30 Jan 29 11:19:37 crc kubenswrapper[4593]: I0129 11:19:37.859514 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="proxy-httpd" containerID="cri-o://c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b" gracePeriod=30 Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.876710 4593 generic.go:334] "Generic (PLEG): container finished" podID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerID="c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b" exitCode=0 Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.877116 4593 generic.go:334] "Generic (PLEG): container finished" podID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerID="884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4" exitCode=2 Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.877129 4593 generic.go:334] "Generic (PLEG): container finished" podID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" 
containerID="50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98" exitCode=0 Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.876919 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerDied","Data":"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b"} Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.877165 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerDied","Data":"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4"} Jan 29 11:19:38 crc kubenswrapper[4593]: I0129 11:19:38.877189 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerDied","Data":"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98"} Jan 29 11:19:41 crc kubenswrapper[4593]: I0129 11:19:41.102008 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:19:41 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:19:41 crc kubenswrapper[4593]: > Jan 29 11:19:41 crc kubenswrapper[4593]: I0129 11:19:41.102551 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:19:41 crc kubenswrapper[4593]: I0129 11:19:41.103313 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95"} pod="openshift-marketplace/redhat-operators-k4l8n" containerMessage="Container registry-server failed startup probe, will be restarted" Jan 29 11:19:41 crc kubenswrapper[4593]: I0129 11:19:41.103344 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" containerID="cri-o://01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95" gracePeriod=30 Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.668312 4593 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.790717 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") "
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.790820 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") "
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.790943 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") "
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.790969 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") "
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791027 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") "
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791068 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppkf8\" (UniqueName: \"kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") "
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791204 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") "
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791306 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data\") pod \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\" (UID: \"df1f4c00-33e4-4464-8ce0-c188cd6c2098\") "
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791319 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.791510 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.792111 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.792128 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/df1f4c00-33e4-4464-8ce0-c188cd6c2098-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.809600 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8" (OuterVolumeSpecName: "kube-api-access-ppkf8") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "kube-api-access-ppkf8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.812285 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts" (OuterVolumeSpecName: "scripts") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.827906 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.877976 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.894574 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.894603 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.894616 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppkf8\" (UniqueName: \"kubernetes.io/projected/df1f4c00-33e4-4464-8ce0-c188cd6c2098-kube-api-access-ppkf8\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.894624 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.896485 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.909315 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data" (OuterVolumeSpecName: "config-data") pod "df1f4c00-33e4-4464-8ce0-c188cd6c2098" (UID: "df1f4c00-33e4-4464-8ce0-c188cd6c2098"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.921854 4593 generic.go:334] "Generic (PLEG): container finished" podID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerID="1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83" exitCode=0 Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.921904 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerDied","Data":"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83"} Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.921949 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.921975 4593 scope.go:117] "RemoveContainer" containerID="c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.921958 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"df1f4c00-33e4-4464-8ce0-c188cd6c2098","Type":"ContainerDied","Data":"b893608d9e63ce09c17a2cd3bafb65d7a0e42cb80d9169775a8751adace0b1ce"} Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.955843 4593 scope.go:117] "RemoveContainer" containerID="884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.979233 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.990893 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.996316 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:43 crc kubenswrapper[4593]: I0129 11:19:43.996353 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df1f4c00-33e4-4464-8ce0-c188cd6c2098-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.000791 4593 scope.go:117] "RemoveContainer" containerID="50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004235 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.004568 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-notification-agent" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004586 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-notification-agent" Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.004604 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="proxy-httpd" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004611 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="proxy-httpd" Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.004733 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="sg-core" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004742 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="sg-core" Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.004765 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-central-agent" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004771 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-central-agent" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004947 4593 
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004958 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="proxy-httpd"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004969 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-central-agent"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.004994 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" containerName="ceilometer-notification-agent"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.006573 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.012288 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.012570 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.012584 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.024707 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.066264 4593 scope.go:117] "RemoveContainer" containerID="1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.092826 4593 scope.go:117] "RemoveContainer" containerID="c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b"
Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.094001 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b\": container with ID starting with c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b not found: ID does not exist" containerID="c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094036 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b"} err="failed to get container status \"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b\": rpc error: code = NotFound desc = could not find container \"c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b\": container with ID starting with c04ac5e6acf941bc843033b7be031ca946b9bac6616b3eb6fadf35410a4a4a6b not found: ID does not exist"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094064 4593 scope.go:117] "RemoveContainer" containerID="884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4"
Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.094390 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4\": container with ID starting with 884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4 not found: ID does not exist" containerID="884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094410 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4"} err="failed to get container status \"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4\": rpc error: code = NotFound desc = could not find container \"884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4\": container with ID starting with 884331dbae2ce84a16d83ee26f74728e15b739dbd7e33170f3a8bcb1427b10d4 not found: ID does not exist"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094423 4593 scope.go:117] "RemoveContainer" containerID="50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98"
Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.094836 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98\": container with ID starting with 50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98 not found: ID does not exist" containerID="50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094859 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98"} err="failed to get container status \"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98\": rpc error: code = NotFound desc = could not find container \"50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98\": container with ID starting with 50bb98a7a38e179861ea0aae439e5ea1f8482ffc3a50d97a0cb8efb1c4a7ef98 not found: ID does not exist"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.094874 4593 scope.go:117] "RemoveContainer" containerID="1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83"
Jan 29 11:19:44 crc kubenswrapper[4593]: E0129 11:19:44.095269 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83\": container with ID starting with 1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83 not found: ID does not exist" containerID="1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.095288 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83"} err="failed to get container status \"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83\": rpc error: code = NotFound desc = could not find container \"1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83\": container with ID starting with 1a27e6d61d6cc1b63ae220bc5e31e7cc48cdb73bb715859c62437115ad55ae83 not found: ID does not exist"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.200990 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q87z\" (UniqueName: \"kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201065 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201115 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201174 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201214 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201259 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201290 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.201362 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.303383 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.303795 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.303856 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.303893 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.303928 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.304020 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q87z\" (UniqueName: \"kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.304062 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.304102 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.304771 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.308968 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.310758 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.311008 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.313268 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0"
pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.388179 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.439763 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q87z\" (UniqueName: \"kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.441507 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.646898 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:19:44 crc kubenswrapper[4593]: I0129 11:19:44.912453 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:19:45 crc kubenswrapper[4593]: I0129 11:19:45.050748 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bdffb4784-5zp8q" podUID="be4a01cd-2eb7-48e8-8a7e-eb02f8851188" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 29 11:19:45 crc kubenswrapper[4593]: I0129 11:19:45.092383 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1f4c00-33e4-4464-8ce0-c188cd6c2098" path="/var/lib/kubelet/pods/df1f4c00-33e4-4464-8ce0-c188cd6c2098/volumes" Jan 29 11:19:45 crc kubenswrapper[4593]: I0129 11:19:45.189106 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:19:45 crc kubenswrapper[4593]: W0129 11:19:45.193564 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod934ccdca_f1e6_43d2_af69_2efb205bf387.slice/crio-4dce39e9f6258739668c6759897048e09e8458a8965cc4d5beb204c4759ad763 WatchSource:0}: Error finding container 4dce39e9f6258739668c6759897048e09e8458a8965cc4d5beb204c4759ad763: Status 404 returned error can't find the container with id 4dce39e9f6258739668c6759897048e09e8458a8965cc4d5beb204c4759ad763 Jan 29 11:19:46 crc kubenswrapper[4593]: I0129 11:19:46.481522 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerStarted","Data":"4dce39e9f6258739668c6759897048e09e8458a8965cc4d5beb204c4759ad763"} Jan 29 11:19:47 crc kubenswrapper[4593]: I0129 11:19:47.500900 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerStarted","Data":"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050"} Jan 29 11:19:48 crc kubenswrapper[4593]: I0129 11:19:48.529064 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerStarted","Data":"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b"} Jan 29 11:19:48 crc kubenswrapper[4593]: I0129 11:19:48.529434 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerStarted","Data":"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f"} Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.571286 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerStarted","Data":"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9"} Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.573118 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.586267 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k4l8n_9194cbfb-27b9-47e8-90eb-64b9391d0b07/registry-server/0.log" Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.595581 4593 generic.go:334] "Generic (PLEG): container finished" podID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerID="01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95" exitCode=0 Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.595996 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95"} Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.596130 4593 scope.go:117] "RemoveContainer" containerID="392c83c8b20810b83ec9a5ece7d4422790dc84f02f822abe01aa473a1c9a74d9" Jan 29 11:19:51 crc kubenswrapper[4593]: I0129 11:19:51.607296 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.060269705 podStartE2EDuration="8.607277133s" podCreationTimestamp="2026-01-29 11:19:43 +0000 UTC" firstStartedPulling="2026-01-29 11:19:45.196239467 +0000 UTC m=+1251.069273658" lastFinishedPulling="2026-01-29 11:19:50.743246895 +0000 UTC m=+1256.616281086" observedRunningTime="2026-01-29 11:19:51.592338468 +0000 UTC m=+1257.465372659" watchObservedRunningTime="2026-01-29 11:19:51.607277133 +0000 UTC m=+1257.480311324" Jan 29 11:19:52 crc kubenswrapper[4593]: I0129 11:19:52.615716 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerStarted","Data":"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2"} Jan 29 11:19:54 crc kubenswrapper[4593]: I0129 11:19:54.910296 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:19:55 crc 
Jan 29 11:19:56 crc kubenswrapper[4593]: I0129 11:19:56.668007 4593 generic.go:334] "Generic (PLEG): container finished" podID="9a120fd3-e300-459e-9c9b-dd0f3da25621" containerID="81d2ae81ac7fd09960ec8dcecfdd7fb40c2612e8262393b7c2c13c07e2588b6b" exitCode=0
Jan 29 11:19:56 crc kubenswrapper[4593]: I0129 11:19:56.669139 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vkj44" event={"ID":"9a120fd3-e300-459e-9c9b-dd0f3da25621","Type":"ContainerDied","Data":"81d2ae81ac7fd09960ec8dcecfdd7fb40c2612e8262393b7c2c13c07e2588b6b"}
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.092905 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vkj44"
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.098211 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle\") pod \"9a120fd3-e300-459e-9c9b-dd0f3da25621\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") "
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.098261 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxmg7\" (UniqueName: \"kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7\") pod \"9a120fd3-e300-459e-9c9b-dd0f3da25621\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") "
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.098326 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data\") pod \"9a120fd3-e300-459e-9c9b-dd0f3da25621\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") "
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.098452 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts\") pod \"9a120fd3-e300-459e-9c9b-dd0f3da25621\" (UID: \"9a120fd3-e300-459e-9c9b-dd0f3da25621\") "
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.107878 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts" (OuterVolumeSpecName: "scripts") pod "9a120fd3-e300-459e-9c9b-dd0f3da25621" (UID: "9a120fd3-e300-459e-9c9b-dd0f3da25621"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.117890 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7" (OuterVolumeSpecName: "kube-api-access-dxmg7") pod "9a120fd3-e300-459e-9c9b-dd0f3da25621" (UID: "9a120fd3-e300-459e-9c9b-dd0f3da25621"). InnerVolumeSpecName "kube-api-access-dxmg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.164669 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data" (OuterVolumeSpecName: "config-data") pod "9a120fd3-e300-459e-9c9b-dd0f3da25621" (UID: "9a120fd3-e300-459e-9c9b-dd0f3da25621"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.197570 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a120fd3-e300-459e-9c9b-dd0f3da25621" (UID: "9a120fd3-e300-459e-9c9b-dd0f3da25621"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.201288 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-scripts\") on node \"crc\" DevicePath \"\""
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.201324 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.201340 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxmg7\" (UniqueName: \"kubernetes.io/projected/9a120fd3-e300-459e-9c9b-dd0f3da25621-kube-api-access-dxmg7\") on node \"crc\" DevicePath \"\""
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.201352 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a120fd3-e300-459e-9c9b-dd0f3da25621-config-data\") on node \"crc\" DevicePath \"\""
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.710364 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vkj44" event={"ID":"9a120fd3-e300-459e-9c9b-dd0f3da25621","Type":"ContainerDied","Data":"5657eeacbcf8694db60da42cd98750e99517877fa702ba31f32e45b7a57b37a1"}
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.710791 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5657eeacbcf8694db60da42cd98750e99517877fa702ba31f32e45b7a57b37a1"
Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.710588 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vkj44"
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vkj44" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.985798 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:19:58 crc kubenswrapper[4593]: E0129 11:19:58.986319 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a120fd3-e300-459e-9c9b-dd0f3da25621" containerName="nova-cell0-conductor-db-sync" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.986344 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a120fd3-e300-459e-9c9b-dd0f3da25621" containerName="nova-cell0-conductor-db-sync" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.986581 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a120fd3-e300-459e-9c9b-dd0f3da25621" containerName="nova-cell0-conductor-db-sync" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.987431 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.990188 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dv5z9" Jan 29 11:19:58 crc kubenswrapper[4593]: I0129 11:19:58.990394 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.003458 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.016469 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.016558 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkr8l\" (UniqueName: \"kubernetes.io/projected/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-kube-api-access-wkr8l\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.016796 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.120775 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.120838 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkr8l\" (UniqueName: \"kubernetes.io/projected/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-kube-api-access-wkr8l\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: 
I0129 11:19:59.120909 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.154497 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkr8l\" (UniqueName: \"kubernetes.io/projected/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-kube-api-access-wkr8l\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.154578 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.155393 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f\") " pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.329957 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 29 11:19:59 crc kubenswrapper[4593]: I0129 11:19:59.858397 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 29 11:19:59 crc kubenswrapper[4593]: W0129 11:19:59.859622 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb50238c6_e2ee_4e0b_a9c9_ded7ee100c6f.slice/crio-25d4cf3eba23d9e685a33c5df8dec551aaf6e33f44d956555b74db089039ef5e WatchSource:0}: Error finding container 25d4cf3eba23d9e685a33c5df8dec551aaf6e33f44d956555b74db089039ef5e: Status 404 returned error can't find the container with id 25d4cf3eba23d9e685a33c5df8dec551aaf6e33f44d956555b74db089039ef5e Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 11:20:00.053366 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 11:20:00.053603 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 11:20:00.867785 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f","Type":"ContainerStarted","Data":"6f005e0f24fa46ef5dd9f95d49e1d95dfec214ed45107732d9cd041a3d060478"} Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 11:20:00.868971 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 11:20:00.869113 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f","Type":"ContainerStarted","Data":"25d4cf3eba23d9e685a33c5df8dec551aaf6e33f44d956555b74db089039ef5e"} Jan 29 11:20:00 crc kubenswrapper[4593]: I0129 
Jan 29 11:20:01 crc kubenswrapper[4593]: I0129 11:20:01.117617 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:20:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:20:01 crc kubenswrapper[4593]: >
Jan 29 11:20:03 crc kubenswrapper[4593]: I0129 11:20:03.947088 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 11:20:03 crc kubenswrapper[4593]: I0129 11:20:03.947883 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 11:20:03 crc kubenswrapper[4593]: I0129 11:20:03.947976 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2"
Jan 29 11:20:03 crc kubenswrapper[4593]: I0129 11:20:03.949228 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 11:20:03 crc kubenswrapper[4593]: I0129 11:20:03.949358 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002" gracePeriod=600
Jan 29 11:20:04 crc kubenswrapper[4593]: I0129 11:20:04.910162 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002" exitCode=0
Jan 29 11:20:04 crc kubenswrapper[4593]: I0129 11:20:04.910257 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002"}
Jan 29 11:20:04 crc kubenswrapper[4593]: I0129 11:20:04.910897 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa"}
Jan 29 11:20:04 crc kubenswrapper[4593]: I0129 11:20:04.910991 4593 scope.go:117] "RemoveContainer" containerID="8d1f98c41c3fc4853c4e68bc7e91b4d8483a47efb5351d8fdb5ff5ec5ce9a38d"
Jan 29 11:20:09 crc kubenswrapper[4593]: I0129 11:20:09.359712 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 29 11:20:09 crc kubenswrapper[4593]: I0129 11:20:09.459484 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-fbf566cdb-kbm9z"
Jan 29 11:20:09 crc kubenswrapper[4593]: I0129 11:20:09.513840 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5bdffb4784-5zp8q"
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.273596 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-jfk6z"]
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.275364 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jfk6z"
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.277829 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.277928 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.292234 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jfk6z"]
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.448053 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z"
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.448437 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlg4\" (UniqueName: \"kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z"
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.448469 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z"
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.449253 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z"
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.501152 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.502891 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
11:20:10.502891 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.512336 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.530880 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553296 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crfj\" (UniqueName: \"kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553348 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553388 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlg4\" (UniqueName: \"kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553405 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553438 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553470 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.553512 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.554722 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.562422 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.573538 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.574932 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.615356 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlg4\" (UniqueName: \"kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4\") pod \"nova-cell0-cell-mapping-jfk6z\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.634096 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.635444 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.639965 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.656859 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.656935 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.656959 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.657006 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crfj\" (UniqueName: \"kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.657064 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.657088 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7btbr\" (UniqueName: \"kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.657114 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.660990 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.681413 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.683503 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.683952 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.740278 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crfj\" (UniqueName: \"kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj\") pod \"nova-api-0\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.808939 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.809159 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7btbr\" (UniqueName: \"kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.809254 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data\") pod \"nova-scheduler-0\" (UID: 
\"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.828597 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.830738 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.836431 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.898372 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.899779 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7btbr\" (UniqueName: \"kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr\") pod \"nova-scheduler-0\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.981135 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:10 crc kubenswrapper[4593]: I0129 11:20:10.986317 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.044990 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78sj8\" (UniqueName: \"kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.045043 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.045108 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.045216 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.049529 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:20:11 crc 
kubenswrapper[4593]: I0129 11:20:11.055744 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.125871 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.134685 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.135167 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.146454 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.150612 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.181072 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78sj8\" (UniqueName: \"kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.181961 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.170143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.151149 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:11 crc kubenswrapper[4593]: > Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.182422 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.182608 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0" Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.174591 4593 util.go:30] "No sandbox for pod can be found. 
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.213127 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"]
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.214786 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.214974 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.232495 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78sj8\" (UniqueName: \"kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8\") pod \"nova-metadata-0\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " pod="openstack/nova-metadata-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.278272 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"]
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288050 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ppsw\" (UniqueName: \"kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288336 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288402 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288530 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288560 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288877 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.288944 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kpzt\" (UniqueName: \"kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.289023 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.289048 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.368840 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.411120 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ppsw\" (UniqueName: \"kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.426137 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.426320 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.426719 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.426770 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.427004 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.427039 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kpzt\" (UniqueName: \"kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.427102 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.427126 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.428476 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.429474 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.430066 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.430762 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.431328 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.440292 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ppsw\" (UniqueName: \"kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw\") pod \"dnsmasq-dns-bccf8f775-bsx9x\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.442361 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.451330 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.486659 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kpzt\" (UniqueName: \"kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt\") pod \"nova-cell1-novncproxy-0\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:20:11 crc kubenswrapper[4593]: I0129 11:20:11.576815 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x"
Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.010179 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.089819 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 29 11:20:12 crc kubenswrapper[4593]: W0129 11:20:12.150745 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd09a34f_e8e0_45ab_8106_550772be304d.slice/crio-d33e85a542f161cdeff330ae3f58078f90938b3f287467787015c6695fd198e9 WatchSource:0}: Error finding container d33e85a542f161cdeff330ae3f58078f90938b3f287467787015c6695fd198e9: Status 404 returned error can't find the container with id d33e85a542f161cdeff330ae3f58078f90938b3f287467787015c6695fd198e9
Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.305274 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jfk6z"]
Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.607299 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.686508 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 29 11:20:12 crc kubenswrapper[4593]: W0129 11:20:12.747212 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ea9a9cf_fb59_4fec_a11c_3a228320cf32.slice/crio-13727362b708c7d8f4bdedf7112159bac510e7dd8fcbc27ff1f8ffc6f3f09587 WatchSource:0}: Error finding container 13727362b708c7d8f4bdedf7112159bac510e7dd8fcbc27ff1f8ffc6f3f09587: Status 404 returned error can't find the container with id 13727362b708c7d8f4bdedf7112159bac510e7dd8fcbc27ff1f8ffc6f3f09587
Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.780978 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"]
Jan 29 11:20:12 crc kubenswrapper[4593]: I0129 11:20:12.906953 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.068023 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wc9fh"]
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.069404 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.077365 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.077536 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.098524 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wc9fh"]
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.135068 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerStarted","Data":"13727362b708c7d8f4bdedf7112159bac510e7dd8fcbc27ff1f8ffc6f3f09587"}
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.137043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" event={"ID":"697e4dbe-9b00-4891-9456-f76cb9642401","Type":"ContainerStarted","Data":"7eb448007e7f2f259e7551ed6226b778b13ff57e3f9a0c2ec212e1fb5e5be79a"}
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.138311 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8","Type":"ContainerStarted","Data":"afca7bf4b299e69d695725ee22c529f3ea659c864ce859245236b6ced858cb90"}
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.139280 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerStarted","Data":"d33e85a542f161cdeff330ae3f58078f90938b3f287467787015c6695fd198e9"}
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.140179 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54be0c9a-2dea-467c-afa6-230000d9ccfa","Type":"ContainerStarted","Data":"7a4e7135bde371deba18f2e2d879e899cf14dcee993b634bcfe74d5b004e721e"}
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.141806 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jfk6z" event={"ID":"ecc4cd76-a47d-4691-906f-d1617455f100","Type":"ContainerStarted","Data":"96bdd94d7fe01d27f9002652fb0e024d5e4216b747eecd5f1013e14f7c20a7f7"}
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.141850 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jfk6z" event={"ID":"ecc4cd76-a47d-4691-906f-d1617455f100","Type":"ContainerStarted","Data":"40b85745aaf0431c0c3b188b6e870f9ab2cee2968144160c13e9e9930341c6fc"}
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.158877 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.158946 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.159168 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.159668 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm2nz\" (UniqueName: \"kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.167428 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-jfk6z" podStartSLOduration=3.1674012400000002 podStartE2EDuration="3.16740124s" podCreationTimestamp="2026-01-29 11:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:13.159695121 +0000 UTC m=+1279.032729322" watchObservedRunningTime="2026-01-29 11:20:13.16740124 +0000 UTC m=+1279.040435431"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.264400 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.264611 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm2nz\" (UniqueName: \"kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.264745 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.264827 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.269962 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.270592 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.274191 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.288344 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm2nz\" (UniqueName: \"kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz\") pod \"nova-cell1-conductor-db-sync-wc9fh\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:13 crc kubenswrapper[4593]: I0129 11:20:13.392348 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wc9fh"
Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.200858 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wc9fh"]
Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.232058 4593 generic.go:334] "Generic (PLEG): container finished" podID="697e4dbe-9b00-4891-9456-f76cb9642401" containerID="7393be6f52eedddb8f2e44100a437ddd9c4a6aceb5605fe268b7dc5e484c61b6" exitCode=0
Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.233880 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" event={"ID":"697e4dbe-9b00-4891-9456-f76cb9642401","Type":"ContainerDied","Data":"7393be6f52eedddb8f2e44100a437ddd9c4a6aceb5605fe268b7dc5e484c61b6"}
Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.247285 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5bdffb4784-5zp8q"
Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.448475 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"]
Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.449654 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon-log" containerID="cri-o://79e5fad4ce8a136539fe157f20b007cd9dda01813dc5bd26b79f98167ce8f3c8" gracePeriod=30
Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.450047 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" containerID="cri-o://3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909" gracePeriod=30
containerID="cri-o://3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909" gracePeriod=30 Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.479803 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:20:14 crc kubenswrapper[4593]: I0129 11:20:14.693011 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.289887 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" event={"ID":"c4d30b0b-741b-4275-bcd3-65f27a294d54","Type":"ContainerStarted","Data":"becc277c4dab17e63d11203d4fe1da3af35724523a182bc72abe031b3a628c8a"} Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.290233 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" event={"ID":"c4d30b0b-741b-4275-bcd3-65f27a294d54","Type":"ContainerStarted","Data":"8dc46203d3c6c5d1cde15f072717e4362e4df9ca33b0077c8bfb3bc44346b805"} Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.379016 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" event={"ID":"697e4dbe-9b00-4891-9456-f76cb9642401","Type":"ContainerStarted","Data":"5c3d893d50de695f2752e97704ce1977c263a00d43a535d7cade0a1f98508eeb"} Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.379930 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" podStartSLOduration=2.379911927 podStartE2EDuration="2.379911927s" podCreationTimestamp="2026-01-29 11:20:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:15.314156396 +0000 UTC m=+1281.187190597" watchObservedRunningTime="2026-01-29 11:20:15.379911927 +0000 UTC m=+1281.252946118" Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.380061 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:15 crc kubenswrapper[4593]: I0129 11:20:15.410965 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" podStartSLOduration=4.410933827 podStartE2EDuration="4.410933827s" podCreationTimestamp="2026-01-29 11:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:15.406409604 +0000 UTC m=+1281.279443795" watchObservedRunningTime="2026-01-29 11:20:15.410933827 +0000 UTC m=+1281.283968028" Jan 29 11:20:16 crc kubenswrapper[4593]: I0129 11:20:16.235777 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:16 crc kubenswrapper[4593]: I0129 11:20:16.255921 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:19 crc kubenswrapper[4593]: I0129 11:20:19.445194 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:43798->10.217.0.146:8443: read: connection reset by peer" Jan 29 11:20:19 crc kubenswrapper[4593]: I0129 11:20:19.446497 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:20:20 crc kubenswrapper[4593]: I0129 11:20:20.433358 4593 generic.go:334] "Generic (PLEG): container finished" podID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerID="3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909" exitCode=0 Jan 29 11:20:20 crc kubenswrapper[4593]: I0129 11:20:20.433442 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerDied","Data":"3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909"} Jan 29 11:20:20 crc kubenswrapper[4593]: I0129 11:20:20.433996 4593 scope.go:117] "RemoveContainer" containerID="d530af95b0eed70c00fd912ebcf7a37fa3a57fbb18ac1239a4c7320a7f27c6af" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.151560 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:21 crc kubenswrapper[4593]: > Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.444354 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54be0c9a-2dea-467c-afa6-230000d9ccfa","Type":"ContainerStarted","Data":"660df2719e4927e909a269c0af10ce5b75a1a0017c3734f8e647f89f3520914c"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.451733 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerStarted","Data":"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.451789 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerStarted","Data":"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.451893 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-log" containerID="cri-o://958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" gracePeriod=30 Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.452004 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-metadata" containerID="cri-o://453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" gracePeriod=30 Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.458272 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" 
containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69" gracePeriod=30 Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.458386 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8","Type":"ContainerStarted","Data":"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.463572 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerStarted","Data":"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.463649 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerStarted","Data":"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594"} Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.473172 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.09940852 podStartE2EDuration="11.473150526s" podCreationTimestamp="2026-01-29 11:20:10 +0000 UTC" firstStartedPulling="2026-01-29 11:20:12.63858893 +0000 UTC m=+1278.511623121" lastFinishedPulling="2026-01-29 11:20:20.012330936 +0000 UTC m=+1285.885365127" observedRunningTime="2026-01-29 11:20:21.472606762 +0000 UTC m=+1287.345640953" watchObservedRunningTime="2026-01-29 11:20:21.473150526 +0000 UTC m=+1287.346184717" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.512216 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.620172 podStartE2EDuration="11.512191153s" podCreationTimestamp="2026-01-29 11:20:10 +0000 UTC" firstStartedPulling="2026-01-29 11:20:12.171418568 +0000 UTC m=+1278.044452759" lastFinishedPulling="2026-01-29 11:20:20.063437721 +0000 UTC m=+1285.936471912" observedRunningTime="2026-01-29 11:20:21.493352114 +0000 UTC m=+1287.366386305" watchObservedRunningTime="2026-01-29 11:20:21.512191153 +0000 UTC m=+1287.385225344" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.524273 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.360664165 podStartE2EDuration="11.5242501s" podCreationTimestamp="2026-01-29 11:20:10 +0000 UTC" firstStartedPulling="2026-01-29 11:20:12.930212577 +0000 UTC m=+1278.803246768" lastFinishedPulling="2026-01-29 11:20:20.093798512 +0000 UTC m=+1285.966832703" observedRunningTime="2026-01-29 11:20:21.52127541 +0000 UTC m=+1287.394309621" watchObservedRunningTime="2026-01-29 11:20:21.5242501 +0000 UTC m=+1287.397284291" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.548283 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.243103851 podStartE2EDuration="11.54825071s" podCreationTimestamp="2026-01-29 11:20:10 +0000 UTC" firstStartedPulling="2026-01-29 11:20:12.754818827 +0000 UTC m=+1278.627853018" lastFinishedPulling="2026-01-29 11:20:20.059965686 +0000 UTC m=+1285.932999877" observedRunningTime="2026-01-29 11:20:21.53679485 +0000 UTC m=+1287.409829041" watchObservedRunningTime="2026-01-29 11:20:21.54825071 +0000 UTC m=+1287.421284911" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 
11:20:21.578826 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.689530 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:20:21 crc kubenswrapper[4593]: I0129 11:20:21.690217 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="dnsmasq-dns" containerID="cri-o://71929b9f4271d72dbfcb871f40c2a2b36bba6325c1864b1f8ec830759d7bd059" gracePeriod=10 Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.012317 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.446086 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.506395 4593 generic.go:334] "Generic (PLEG): container finished" podID="7aadd015-f714-41cf-b532-396d9f5f3946" containerID="71929b9f4271d72dbfcb871f40c2a2b36bba6325c1864b1f8ec830759d7bd059" exitCode=0 Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.506761 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" event={"ID":"7aadd015-f714-41cf-b532-396d9f5f3946","Type":"ContainerDied","Data":"71929b9f4271d72dbfcb871f40c2a2b36bba6325c1864b1f8ec830759d7bd059"} Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.508883 4593 generic.go:334] "Generic (PLEG): container finished" podID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerID="453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" exitCode=0 Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.508929 4593 generic.go:334] "Generic (PLEG): container finished" podID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerID="958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" exitCode=143 Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.510538 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.511614 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerDied","Data":"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab"} Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.511686 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerDied","Data":"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325"} Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.511706 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ea9a9cf-fb59-4fec-a11c-3a228320cf32","Type":"ContainerDied","Data":"13727362b708c7d8f4bdedf7112159bac510e7dd8fcbc27ff1f8ffc6f3f09587"} Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.511726 4593 scope.go:117] "RemoveContainer" containerID="453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.567159 4593 scope.go:117] "RemoveContainer" containerID="958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.631576 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle\") pod \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.631619 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs\") pod \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.631649 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data\") pod \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.631693 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78sj8\" (UniqueName: \"kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8\") pod \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\" (UID: \"8ea9a9cf-fb59-4fec-a11c-3a228320cf32\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.636207 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs" (OuterVolumeSpecName: "logs") pod "8ea9a9cf-fb59-4fec-a11c-3a228320cf32" (UID: "8ea9a9cf-fb59-4fec-a11c-3a228320cf32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.653354 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8" (OuterVolumeSpecName: "kube-api-access-78sj8") pod "8ea9a9cf-fb59-4fec-a11c-3a228320cf32" (UID: "8ea9a9cf-fb59-4fec-a11c-3a228320cf32"). InnerVolumeSpecName "kube-api-access-78sj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.696777 4593 scope.go:117] "RemoveContainer" containerID="453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" Jan 29 11:20:22 crc kubenswrapper[4593]: E0129 11:20:22.698897 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab\": container with ID starting with 453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab not found: ID does not exist" containerID="453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.698929 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab"} err="failed to get container status \"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab\": rpc error: code = NotFound desc = could not find container \"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab\": container with ID starting with 453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab not found: ID does not exist" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.698951 4593 scope.go:117] "RemoveContainer" containerID="958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" Jan 29 11:20:22 crc kubenswrapper[4593]: E0129 11:20:22.703103 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325\": container with ID starting with 958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325 not found: ID does not exist" containerID="958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.703145 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325"} err="failed to get container status \"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325\": rpc error: code = NotFound desc = could not find container \"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325\": container with ID starting with 958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325 not found: ID does not exist" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.703171 4593 scope.go:117] "RemoveContainer" containerID="453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.707814 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab"} err="failed to get container status \"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab\": rpc error: code = NotFound desc = could not find container \"453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab\": container with ID starting with 453751cb18ba3298e4ec519453a0895c0b26798f882543eb1bf0dc24cde66bab not found: ID does not exist" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.707857 4593 scope.go:117] "RemoveContainer" containerID="958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.708153 4593 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325"} err="failed to get container status \"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325\": rpc error: code = NotFound desc = could not find container \"958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325\": container with ID starting with 958d64e3389d4c961ffcd3fbfb4fa479df8235abc8fe5c15a4d3edf3d163b325 not found: ID does not exist" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.713742 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ea9a9cf-fb59-4fec-a11c-3a228320cf32" (UID: "8ea9a9cf-fb59-4fec-a11c-3a228320cf32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.717082 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.725833 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data" (OuterVolumeSpecName: "config-data") pod "8ea9a9cf-fb59-4fec-a11c-3a228320cf32" (UID: "8ea9a9cf-fb59-4fec-a11c-3a228320cf32"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737341 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737484 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737540 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737644 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737757 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.737804 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbtth\" (UniqueName: 
\"kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth\") pod \"7aadd015-f714-41cf-b532-396d9f5f3946\" (UID: \"7aadd015-f714-41cf-b532-396d9f5f3946\") " Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.738283 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.738308 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.738320 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.738331 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78sj8\" (UniqueName: \"kubernetes.io/projected/8ea9a9cf-fb59-4fec-a11c-3a228320cf32-kube-api-access-78sj8\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.755752 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth" (OuterVolumeSpecName: "kube-api-access-xbtth") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "kube-api-access-xbtth". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.841340 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbtth\" (UniqueName: \"kubernetes.io/projected/7aadd015-f714-41cf-b532-396d9f5f3946-kube-api-access-xbtth\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.852255 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.878365 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config" (OuterVolumeSpecName: "config") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.887738 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.908294 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.909183 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7aadd015-f714-41cf-b532-396d9f5f3946" (UID: "7aadd015-f714-41cf-b532-396d9f5f3946"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.942419 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.942449 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.942462 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.942471 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.942480 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7aadd015-f714-41cf-b532-396d9f5f3946-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.973215 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:22 crc kubenswrapper[4593]: I0129 11:20:22.996802 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.063731 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:23 crc kubenswrapper[4593]: E0129 11:20:23.064215 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-metadata" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064238 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-metadata" Jan 29 11:20:23 crc kubenswrapper[4593]: E0129 11:20:23.064254 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-log" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064261 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-log" Jan 29 11:20:23 
crc kubenswrapper[4593]: E0129 11:20:23.064295 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="dnsmasq-dns" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064302 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="dnsmasq-dns" Jan 29 11:20:23 crc kubenswrapper[4593]: E0129 11:20:23.064319 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="init" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064324 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="init" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064489 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-log" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064505 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" containerName="dnsmasq-dns" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.064512 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" containerName="nova-metadata-metadata" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.065600 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.066884 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.069334 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.070056 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.103143 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea9a9cf-fb59-4fec-a11c-3a228320cf32" path="/var/lib/kubelet/pods/8ea9a9cf-fb59-4fec-a11c-3a228320cf32/volumes" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.249316 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.249801 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jgb\" (UniqueName: \"kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.251103 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.251345 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.251552 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.353239 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.353871 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.354049 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.354182 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8jgb\" (UniqueName: \"kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.354432 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.355139 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.364478 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.366443 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 
crc kubenswrapper[4593]: I0129 11:20:23.381047 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.389618 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8jgb\" (UniqueName: \"kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb\") pod \"nova-metadata-0\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.400330 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.543850 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" event={"ID":"7aadd015-f714-41cf-b532-396d9f5f3946","Type":"ContainerDied","Data":"f371f618c4302fbf0bf3244208980a3b33a4e263434fd709be03f076a3036627"} Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.543944 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-9hb8w" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.544256 4593 scope.go:117] "RemoveContainer" containerID="71929b9f4271d72dbfcb871f40c2a2b36bba6325c1864b1f8ec830759d7bd059" Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.598673 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.612065 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-9hb8w"] Jan 29 11:20:23 crc kubenswrapper[4593]: I0129 11:20:23.702916 4593 scope.go:117] "RemoveContainer" containerID="d7d10b40887ad7cb3695100bfd7e2e09a54897e25591da02ac46e6c0d27cc415" Jan 29 11:20:24 crc kubenswrapper[4593]: I0129 11:20:24.150414 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:24 crc kubenswrapper[4593]: I0129 11:20:24.586231 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerStarted","Data":"35c4fb91bfd0ce4ebd4422950ffc22b955b4cb92b4cb7a470281bd92f4f21b4d"} Jan 29 11:20:24 crc kubenswrapper[4593]: I0129 11:20:24.586562 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerStarted","Data":"4cefe4364c2588402ec5dd748f4b5e3fc4e65f94d005770bf05acdcf92ebff76"} Jan 29 11:20:24 crc kubenswrapper[4593]: I0129 11:20:24.911486 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:20:25 crc kubenswrapper[4593]: I0129 11:20:25.090511 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aadd015-f714-41cf-b532-396d9f5f3946" path="/var/lib/kubelet/pods/7aadd015-f714-41cf-b532-396d9f5f3946/volumes" Jan 29 11:20:25 crc kubenswrapper[4593]: I0129 11:20:25.600662 4593 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerStarted","Data":"2a149a3cd3c416e532f08f09e3efa6137160f0dec84f0e59b848968641500164"} Jan 29 11:20:25 crc kubenswrapper[4593]: I0129 11:20:25.628461 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.628435835 podStartE2EDuration="3.628435835s" podCreationTimestamp="2026-01-29 11:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:25.624054177 +0000 UTC m=+1291.497088368" watchObservedRunningTime="2026-01-29 11:20:25.628435835 +0000 UTC m=+1291.501470036" Jan 29 11:20:26 crc kubenswrapper[4593]: I0129 11:20:26.175031 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 11:20:28 crc kubenswrapper[4593]: I0129 11:20:28.400987 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:20:28 crc kubenswrapper[4593]: I0129 11:20:28.404791 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:20:28 crc kubenswrapper[4593]: I0129 11:20:28.643776 4593 generic.go:334] "Generic (PLEG): container finished" podID="ecc4cd76-a47d-4691-906f-d1617455f100" containerID="96bdd94d7fe01d27f9002652fb0e024d5e4216b747eecd5f1013e14f7c20a7f7" exitCode=0 Jan 29 11:20:28 crc kubenswrapper[4593]: I0129 11:20:28.643861 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jfk6z" event={"ID":"ecc4cd76-a47d-4691-906f-d1617455f100","Type":"ContainerDied","Data":"96bdd94d7fe01d27f9002652fb0e024d5e4216b747eecd5f1013e14f7c20a7f7"} Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.201452 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.401565 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle\") pod \"ecc4cd76-a47d-4691-906f-d1617455f100\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.401704 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data\") pod \"ecc4cd76-a47d-4691-906f-d1617455f100\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.401736 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rlg4\" (UniqueName: \"kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4\") pod \"ecc4cd76-a47d-4691-906f-d1617455f100\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.401820 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts\") pod \"ecc4cd76-a47d-4691-906f-d1617455f100\" (UID: \"ecc4cd76-a47d-4691-906f-d1617455f100\") " Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.421117 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4" (OuterVolumeSpecName: "kube-api-access-7rlg4") pod "ecc4cd76-a47d-4691-906f-d1617455f100" (UID: "ecc4cd76-a47d-4691-906f-d1617455f100"). InnerVolumeSpecName "kube-api-access-7rlg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.433375 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts" (OuterVolumeSpecName: "scripts") pod "ecc4cd76-a47d-4691-906f-d1617455f100" (UID: "ecc4cd76-a47d-4691-906f-d1617455f100"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.451009 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data" (OuterVolumeSpecName: "config-data") pod "ecc4cd76-a47d-4691-906f-d1617455f100" (UID: "ecc4cd76-a47d-4691-906f-d1617455f100"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.451782 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecc4cd76-a47d-4691-906f-d1617455f100" (UID: "ecc4cd76-a47d-4691-906f-d1617455f100"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.507660 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.507701 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.507714 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rlg4\" (UniqueName: \"kubernetes.io/projected/ecc4cd76-a47d-4691-906f-d1617455f100-kube-api-access-7rlg4\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.507729 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecc4cd76-a47d-4691-906f-d1617455f100-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.670305 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jfk6z" event={"ID":"ecc4cd76-a47d-4691-906f-d1617455f100","Type":"ContainerDied","Data":"40b85745aaf0431c0c3b188b6e870f9ab2cee2968144160c13e9e9930341c6fc"} Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.670696 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40b85745aaf0431c0c3b188b6e870f9ab2cee2968144160c13e9e9930341c6fc" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.670382 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jfk6z" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.831343 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.831405 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.863623 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.863842 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="54be0c9a-2dea-467c-afa6-230000d9ccfa" containerName="nova-scheduler-scheduler" containerID="cri-o://660df2719e4927e909a269c0af10ce5b75a1a0017c3734f8e647f89f3520914c" gracePeriod=30 Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.874569 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.913425 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.913722 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-log" containerID="cri-o://35c4fb91bfd0ce4ebd4422950ffc22b955b4cb92b4cb7a470281bd92f4f21b4d" gracePeriod=30 Jan 29 11:20:30 crc kubenswrapper[4593]: I0129 11:20:30.913889 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" 
containerName="nova-metadata-metadata" containerID="cri-o://2a149a3cd3c416e532f08f09e3efa6137160f0dec84f0e59b848968641500164" gracePeriod=30 Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.111572 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:31 crc kubenswrapper[4593]: > Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.686331 4593 generic.go:334] "Generic (PLEG): container finished" podID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerID="2a149a3cd3c416e532f08f09e3efa6137160f0dec84f0e59b848968641500164" exitCode=0 Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.686374 4593 generic.go:334] "Generic (PLEG): container finished" podID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerID="35c4fb91bfd0ce4ebd4422950ffc22b955b4cb92b4cb7a470281bd92f4f21b4d" exitCode=143 Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.686488 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerDied","Data":"2a149a3cd3c416e532f08f09e3efa6137160f0dec84f0e59b848968641500164"} Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.686555 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerDied","Data":"35c4fb91bfd0ce4ebd4422950ffc22b955b4cb92b4cb7a470281bd92f4f21b4d"} Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.686617 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-log" containerID="cri-o://cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594" gracePeriod=30 Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.687120 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-api" containerID="cri-o://c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e" gracePeriod=30 Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.693252 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Jan 29 11:20:31 crc kubenswrapper[4593]: I0129 11:20:31.693422 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.108384 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.235548 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs\") pod \"78c17a08-712a-47fb-a1eb-f26be532ce98\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.236903 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs\") pod \"78c17a08-712a-47fb-a1eb-f26be532ce98\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.237104 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data\") pod \"78c17a08-712a-47fb-a1eb-f26be532ce98\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.237425 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle\") pod \"78c17a08-712a-47fb-a1eb-f26be532ce98\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.237523 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8jgb\" (UniqueName: \"kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb\") pod \"78c17a08-712a-47fb-a1eb-f26be532ce98\" (UID: \"78c17a08-712a-47fb-a1eb-f26be532ce98\") " Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.238799 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs" (OuterVolumeSpecName: "logs") pod "78c17a08-712a-47fb-a1eb-f26be532ce98" (UID: "78c17a08-712a-47fb-a1eb-f26be532ce98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.250849 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb" (OuterVolumeSpecName: "kube-api-access-v8jgb") pod "78c17a08-712a-47fb-a1eb-f26be532ce98" (UID: "78c17a08-712a-47fb-a1eb-f26be532ce98"). InnerVolumeSpecName "kube-api-access-v8jgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.275558 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78c17a08-712a-47fb-a1eb-f26be532ce98" (UID: "78c17a08-712a-47fb-a1eb-f26be532ce98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.328078 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data" (OuterVolumeSpecName: "config-data") pod "78c17a08-712a-47fb-a1eb-f26be532ce98" (UID: "78c17a08-712a-47fb-a1eb-f26be532ce98"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.334903 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "78c17a08-712a-47fb-a1eb-f26be532ce98" (UID: "78c17a08-712a-47fb-a1eb-f26be532ce98"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.340487 4593 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.340514 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/78c17a08-712a-47fb-a1eb-f26be532ce98-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.340525 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.340538 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78c17a08-712a-47fb-a1eb-f26be532ce98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.340548 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8jgb\" (UniqueName: \"kubernetes.io/projected/78c17a08-712a-47fb-a1eb-f26be532ce98-kube-api-access-v8jgb\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.705846 4593 generic.go:334] "Generic (PLEG): container finished" podID="fd09a34f-e8e0-45ab-8106-550772be304d" containerID="cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594" exitCode=143 Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.705913 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerDied","Data":"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594"} Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.708395 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"78c17a08-712a-47fb-a1eb-f26be532ce98","Type":"ContainerDied","Data":"4cefe4364c2588402ec5dd748f4b5e3fc4e65f94d005770bf05acdcf92ebff76"} Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.708428 4593 scope.go:117] "RemoveContainer" containerID="2a149a3cd3c416e532f08f09e3efa6137160f0dec84f0e59b848968641500164" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.708455 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.749423 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.752646 4593 scope.go:117] "RemoveContainer" containerID="35c4fb91bfd0ce4ebd4422950ffc22b955b4cb92b4cb7a470281bd92f4f21b4d" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.762399 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.775418 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:32 crc kubenswrapper[4593]: E0129 11:20:32.776079 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc4cd76-a47d-4691-906f-d1617455f100" containerName="nova-manage" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.776162 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc4cd76-a47d-4691-906f-d1617455f100" containerName="nova-manage" Jan 29 11:20:32 crc kubenswrapper[4593]: E0129 11:20:32.776259 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-metadata" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.776339 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-metadata" Jan 29 11:20:32 crc kubenswrapper[4593]: E0129 11:20:32.776420 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-log" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.776479 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-log" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.777402 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-log" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.777591 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc4cd76-a47d-4691-906f-d1617455f100" containerName="nova-manage" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.778163 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" containerName="nova-metadata-metadata" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.779530 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.787104 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.787151 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.790793 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.855354 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.855651 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.856153 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fww9l\" (UniqueName: \"kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.856298 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.856383 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.957804 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.957887 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.957963 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 
11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.958035 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.958108 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fww9l\" (UniqueName: \"kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.959942 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.965429 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.974669 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.975356 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fww9l\" (UniqueName: \"kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:32 crc kubenswrapper[4593]: I0129 11:20:32.975494 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data\") pod \"nova-metadata-0\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " pod="openstack/nova-metadata-0" Jan 29 11:20:33 crc kubenswrapper[4593]: I0129 11:20:33.086243 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c17a08-712a-47fb-a1eb-f26be532ce98" path="/var/lib/kubelet/pods/78c17a08-712a-47fb-a1eb-f26be532ce98/volumes" Jan 29 11:20:33 crc kubenswrapper[4593]: I0129 11:20:33.108425 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:20:33 crc kubenswrapper[4593]: I0129 11:20:33.645418 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:20:33 crc kubenswrapper[4593]: W0129 11:20:33.658235 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa00230_26f8_4fa7_b32c_994ec82a6ac4.slice/crio-185a6935f58efd39bffafb91700164ea93f85ee3879bc888a2a51ac02343ec6a WatchSource:0}: Error finding container 185a6935f58efd39bffafb91700164ea93f85ee3879bc888a2a51ac02343ec6a: Status 404 returned error can't find the container with id 185a6935f58efd39bffafb91700164ea93f85ee3879bc888a2a51ac02343ec6a Jan 29 11:20:33 crc kubenswrapper[4593]: I0129 11:20:33.725491 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerStarted","Data":"185a6935f58efd39bffafb91700164ea93f85ee3879bc888a2a51ac02343ec6a"} Jan 29 11:20:34 crc kubenswrapper[4593]: I0129 11:20:34.743911 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerStarted","Data":"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df"} Jan 29 11:20:34 crc kubenswrapper[4593]: I0129 11:20:34.745391 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerStarted","Data":"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c"} Jan 29 11:20:34 crc kubenswrapper[4593]: I0129 11:20:34.771870 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.771844915 podStartE2EDuration="2.771844915s" podCreationTimestamp="2026-01-29 11:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:34.763046417 +0000 UTC m=+1300.636080608" watchObservedRunningTime="2026-01-29 11:20:34.771844915 +0000 UTC m=+1300.644879106" Jan 29 11:20:34 crc kubenswrapper[4593]: I0129 11:20:34.911829 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.760416 4593 generic.go:334] "Generic (PLEG): container finished" podID="54be0c9a-2dea-467c-afa6-230000d9ccfa" containerID="660df2719e4927e909a269c0af10ce5b75a1a0017c3734f8e647f89f3520914c" exitCode=0 Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.760757 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54be0c9a-2dea-467c-afa6-230000d9ccfa","Type":"ContainerDied","Data":"660df2719e4927e909a269c0af10ce5b75a1a0017c3734f8e647f89f3520914c"} Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.760921 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"54be0c9a-2dea-467c-afa6-230000d9ccfa","Type":"ContainerDied","Data":"7a4e7135bde371deba18f2e2d879e899cf14dcee993b634bcfe74d5b004e721e"} Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.760960 4593 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a4e7135bde371deba18f2e2d879e899cf14dcee993b634bcfe74d5b004e721e" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.782743 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.886033 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7btbr\" (UniqueName: \"kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr\") pod \"54be0c9a-2dea-467c-afa6-230000d9ccfa\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.886284 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle\") pod \"54be0c9a-2dea-467c-afa6-230000d9ccfa\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.886405 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data\") pod \"54be0c9a-2dea-467c-afa6-230000d9ccfa\" (UID: \"54be0c9a-2dea-467c-afa6-230000d9ccfa\") " Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.894970 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr" (OuterVolumeSpecName: "kube-api-access-7btbr") pod "54be0c9a-2dea-467c-afa6-230000d9ccfa" (UID: "54be0c9a-2dea-467c-afa6-230000d9ccfa"). InnerVolumeSpecName "kube-api-access-7btbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.916579 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54be0c9a-2dea-467c-afa6-230000d9ccfa" (UID: "54be0c9a-2dea-467c-afa6-230000d9ccfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.924089 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data" (OuterVolumeSpecName: "config-data") pod "54be0c9a-2dea-467c-afa6-230000d9ccfa" (UID: "54be0c9a-2dea-467c-afa6-230000d9ccfa"). InnerVolumeSpecName "config-data". 
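nova-scheduler-0 follows the same replacement cycle seen twice above for nova-metadata-0: SyncLoop DELETE and REMOVE for the old UID, then ADD and UPDATE for a fresh UID under the same pod name, all driven by the apiserver watch stream. (The cadvisor warning earlier, "Status 404 ... can't find the container", appears to be the same kind of startup race seen from the cgroup side and resolves once the new container registers.) A minimal sketch of consuming such a stream with k8s.io/apimachinery's watch package; the mapping of event types to messages is illustrative, not kubelet's exact translation:

    // watch_events.go -- route pod watch events by type.
    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/watch"
    )

    func route(e watch.Event) {
        switch e.Type {
        case watch.Added:
            fmt.Println("pod added")
        case watch.Modified:
            fmt.Println("pod updated (includes deletionTimestamp being set)")
        case watch.Deleted:
            fmt.Println("pod object gone from the API")
        }
    }

    func main() {
        for _, t := range []watch.EventType{watch.Deleted, watch.Added, watch.Modified} {
            route(watch.Event{Type: t})
        }
    }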
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.989573 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.991513 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54be0c9a-2dea-467c-afa6-230000d9ccfa-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:35 crc kubenswrapper[4593]: I0129 11:20:35.991554 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7btbr\" (UniqueName: \"kubernetes.io/projected/54be0c9a-2dea-467c-afa6-230000d9ccfa-kube-api-access-7btbr\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.767558 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.802541 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.818368 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.871954 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:36 crc kubenswrapper[4593]: E0129 11:20:36.872595 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54be0c9a-2dea-467c-afa6-230000d9ccfa" containerName="nova-scheduler-scheduler" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.872621 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="54be0c9a-2dea-467c-afa6-230000d9ccfa" containerName="nova-scheduler-scheduler" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.872884 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="54be0c9a-2dea-467c-afa6-230000d9ccfa" containerName="nova-scheduler-scheduler" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.873862 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.879742 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 11:20:36 crc kubenswrapper[4593]: I0129 11:20:36.887364 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.012830 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.013161 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpt5m\" (UniqueName: \"kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.013322 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.086031 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54be0c9a-2dea-467c-afa6-230000d9ccfa" path="/var/lib/kubelet/pods/54be0c9a-2dea-467c-afa6-230000d9ccfa/volumes" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.115160 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.115356 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.115406 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpt5m\" (UniqueName: \"kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.119599 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.120414 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data\") pod \"nova-scheduler-0\" (UID: 
\"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.136518 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpt5m\" (UniqueName: \"kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m\") pod \"nova-scheduler-0\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.205107 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:20:37 crc kubenswrapper[4593]: I0129 11:20:37.781206 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.109541 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.109619 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.781906 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.813401 4593 generic.go:334] "Generic (PLEG): container finished" podID="fd09a34f-e8e0-45ab-8106-550772be304d" containerID="c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e" exitCode=0 Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.813527 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerDied","Data":"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e"} Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.813561 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd09a34f-e8e0-45ab-8106-550772be304d","Type":"ContainerDied","Data":"d33e85a542f161cdeff330ae3f58078f90938b3f287467787015c6695fd198e9"} Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.813582 4593 scope.go:117] "RemoveContainer" containerID="c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.813851 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.823596 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40dd43f0-0621-4358-8019-b58cd5fbcc79","Type":"ContainerStarted","Data":"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058"} Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.823665 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40dd43f0-0621-4358-8019-b58cd5fbcc79","Type":"ContainerStarted","Data":"c94ac2729f1f8331d111e95fa7df8974b6fcb7da88f692f7369227d26b750286"} Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.849359 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.849276809 podStartE2EDuration="2.849276809s" podCreationTimestamp="2026-01-29 11:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:38.845130997 +0000 UTC m=+1304.718165188" watchObservedRunningTime="2026-01-29 11:20:38.849276809 +0000 UTC m=+1304.722311010" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.870071 4593 scope.go:117] "RemoveContainer" containerID="cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.899868 4593 scope.go:117] "RemoveContainer" containerID="c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e" Jan 29 11:20:38 crc kubenswrapper[4593]: E0129 11:20:38.900236 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e\": container with ID starting with c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e not found: ID does not exist" containerID="c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.900275 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e"} err="failed to get container status \"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e\": rpc error: code = NotFound desc = could not find container \"c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e\": container with ID starting with c81b7688d239bdd13897f418ffeec3bb6a0ec1aa62a8c986ce8bd188ebb40d6e not found: ID does not exist" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.900302 4593 scope.go:117] "RemoveContainer" containerID="cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594" Jan 29 11:20:38 crc kubenswrapper[4593]: E0129 11:20:38.900587 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594\": container with ID starting with cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594 not found: ID does not exist" containerID="cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.900618 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594"} err="failed to get container status 
\"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594\": rpc error: code = NotFound desc = could not find container \"cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594\": container with ID starting with cae1e9ac5b4b49b857f39e56a9ed6ae24fecf3dc4a8a8ec02b94e52110cb7594 not found: ID does not exist" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.953570 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crfj\" (UniqueName: \"kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj\") pod \"fd09a34f-e8e0-45ab-8106-550772be304d\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.953734 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs\") pod \"fd09a34f-e8e0-45ab-8106-550772be304d\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.953793 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle\") pod \"fd09a34f-e8e0-45ab-8106-550772be304d\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.953828 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data\") pod \"fd09a34f-e8e0-45ab-8106-550772be304d\" (UID: \"fd09a34f-e8e0-45ab-8106-550772be304d\") " Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.955331 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs" (OuterVolumeSpecName: "logs") pod "fd09a34f-e8e0-45ab-8106-550772be304d" (UID: "fd09a34f-e8e0-45ab-8106-550772be304d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.955771 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd09a34f-e8e0-45ab-8106-550772be304d-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.983065 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj" (OuterVolumeSpecName: "kube-api-access-9crfj") pod "fd09a34f-e8e0-45ab-8106-550772be304d" (UID: "fd09a34f-e8e0-45ab-8106-550772be304d"). InnerVolumeSpecName "kube-api-access-9crfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.994298 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd09a34f-e8e0-45ab-8106-550772be304d" (UID: "fd09a34f-e8e0-45ab-8106-550772be304d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:38 crc kubenswrapper[4593]: I0129 11:20:38.994399 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data" (OuterVolumeSpecName: "config-data") pod "fd09a34f-e8e0-45ab-8106-550772be304d" (UID: "fd09a34f-e8e0-45ab-8106-550772be304d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.058147 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9crfj\" (UniqueName: \"kubernetes.io/projected/fd09a34f-e8e0-45ab-8106-550772be304d-kube-api-access-9crfj\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.058184 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.058194 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd09a34f-e8e0-45ab-8106-550772be304d-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.137821 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.149665 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.186207 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:39 crc kubenswrapper[4593]: E0129 11:20:39.187018 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-log" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.187158 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-log" Jan 29 11:20:39 crc kubenswrapper[4593]: E0129 11:20:39.187259 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-api" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.187331 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-api" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.187676 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-api" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.187816 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" containerName="nova-api-log" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.189362 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.192257 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.207911 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.429290 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.429441 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgrs2\" (UniqueName: \"kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.429522 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.429619 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.530654 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.531110 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.531327 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgrs2\" (UniqueName: \"kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.531495 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.531963 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " 
pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.536550 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.560889 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.564242 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgrs2\" (UniqueName: \"kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2\") pod \"nova-api-0\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " pod="openstack/nova-api-0" Jan 29 11:20:39 crc kubenswrapper[4593]: I0129 11:20:39.807236 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:20:40 crc kubenswrapper[4593]: I0129 11:20:40.297666 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:20:40 crc kubenswrapper[4593]: I0129 11:20:40.853136 4593 generic.go:334] "Generic (PLEG): container finished" podID="c4d30b0b-741b-4275-bcd3-65f27a294d54" containerID="becc277c4dab17e63d11203d4fe1da3af35724523a182bc72abe031b3a628c8a" exitCode=0 Jan 29 11:20:40 crc kubenswrapper[4593]: I0129 11:20:40.853261 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" event={"ID":"c4d30b0b-741b-4275-bcd3-65f27a294d54","Type":"ContainerDied","Data":"becc277c4dab17e63d11203d4fe1da3af35724523a182bc72abe031b3a628c8a"} Jan 29 11:20:40 crc kubenswrapper[4593]: I0129 11:20:40.855490 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerStarted","Data":"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db"} Jan 29 11:20:40 crc kubenswrapper[4593]: I0129 11:20:40.855531 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerStarted","Data":"f7db3d2de4fdf878656547d9c3589d171005e852c5677ab4b1055551daeb9535"} Jan 29 11:20:41 crc kubenswrapper[4593]: I0129 11:20:41.086496 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd09a34f-e8e0-45ab-8106-550772be304d" path="/var/lib/kubelet/pods/fd09a34f-e8e0-45ab-8106-550772be304d/volumes" Jan 29 11:20:41 crc kubenswrapper[4593]: I0129 11:20:41.105477 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:41 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:41 crc kubenswrapper[4593]: > Jan 29 11:20:41 crc kubenswrapper[4593]: I0129 11:20:41.871790 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerStarted","Data":"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5"} Jan 29 11:20:41 crc 
kubenswrapper[4593]: I0129 11:20:41.901751 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.901716146 podStartE2EDuration="2.901716146s" podCreationTimestamp="2026-01-29 11:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:41.896885514 +0000 UTC m=+1307.769919715" watchObservedRunningTime="2026-01-29 11:20:41.901716146 +0000 UTC m=+1307.774750337" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.207327 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.262362 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.357119 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm2nz\" (UniqueName: \"kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz\") pod \"c4d30b0b-741b-4275-bcd3-65f27a294d54\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.357180 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data\") pod \"c4d30b0b-741b-4275-bcd3-65f27a294d54\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.357291 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle\") pod \"c4d30b0b-741b-4275-bcd3-65f27a294d54\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.357406 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts\") pod \"c4d30b0b-741b-4275-bcd3-65f27a294d54\" (UID: \"c4d30b0b-741b-4275-bcd3-65f27a294d54\") " Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.363615 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz" (OuterVolumeSpecName: "kube-api-access-cm2nz") pod "c4d30b0b-741b-4275-bcd3-65f27a294d54" (UID: "c4d30b0b-741b-4275-bcd3-65f27a294d54"). InnerVolumeSpecName "kube-api-access-cm2nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.367842 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts" (OuterVolumeSpecName: "scripts") pod "c4d30b0b-741b-4275-bcd3-65f27a294d54" (UID: "c4d30b0b-741b-4275-bcd3-65f27a294d54"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.389547 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4d30b0b-741b-4275-bcd3-65f27a294d54" (UID: "c4d30b0b-741b-4275-bcd3-65f27a294d54"). 
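
The startup-latency entry above reduces to simple arithmetic: nova-api-0 was created at 11:20:39 and observed running at 11:20:41.901716146, giving podStartSLOduration=2.901716146s; the zero-value pulling timestamps ("0001-01-01 00:00:00 +0000 UTC") mean no image pull contributed. A sketch of that arithmetic, assuming pull time is excluded from the SLO figure:

package main

import (
	"fmt"
	"time"
)

// startSLO computes startup latency as observed-running minus creation,
// subtracting image-pull time when pull timestamps are set. Zero-value
// pull timestamps (as in the log) leave the duration untouched.
func startSLO(created, firstPull, lastPull, observedRunning time.Time) time.Duration {
	d := observedRunning.Sub(created)
	if !firstPull.IsZero() && !lastPull.IsZero() {
		d -= lastPull.Sub(firstPull)
	}
	return d
}

func main() {
	created := time.Date(2026, 1, 29, 11, 20, 39, 0, time.UTC)
	running := time.Date(2026, 1, 29, 11, 20, 41, 901716146, time.UTC)
	fmt.Println(startSLO(created, time.Time{}, time.Time{}, running)) // 2.901716146s
}
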
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.404433 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data" (OuterVolumeSpecName: "config-data") pod "c4d30b0b-741b-4275-bcd3-65f27a294d54" (UID: "c4d30b0b-741b-4275-bcd3-65f27a294d54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.462846 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm2nz\" (UniqueName: \"kubernetes.io/projected/c4d30b0b-741b-4275-bcd3-65f27a294d54-kube-api-access-cm2nz\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.462896 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.462910 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.462921 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d30b0b-741b-4275-bcd3-65f27a294d54-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.881128 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" event={"ID":"c4d30b0b-741b-4275-bcd3-65f27a294d54","Type":"ContainerDied","Data":"8dc46203d3c6c5d1cde15f072717e4362e4df9ca33b0077c8bfb3bc44346b805"} Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.881216 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dc46203d3c6c5d1cde15f072717e4362e4df9ca33b0077c8bfb3bc44346b805" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.881158 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wc9fh" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.984613 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 11:20:42 crc kubenswrapper[4593]: E0129 11:20:42.985046 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d30b0b-741b-4275-bcd3-65f27a294d54" containerName="nova-cell1-conductor-db-sync" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.985065 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d30b0b-741b-4275-bcd3-65f27a294d54" containerName="nova-cell1-conductor-db-sync" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.985297 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d30b0b-741b-4275-bcd3-65f27a294d54" containerName="nova-cell1-conductor-db-sync" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.986081 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:42 crc kubenswrapper[4593]: I0129 11:20:42.989906 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.006512 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.074494 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.074703 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsgjk\" (UniqueName: \"kubernetes.io/projected/bee10dce-c68f-47f4-84e0-623f276964d8-kube-api-access-gsgjk\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.075149 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.109771 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.109835 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.176472 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.176670 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsgjk\" (UniqueName: \"kubernetes.io/projected/bee10dce-c68f-47f4-84e0-623f276964d8-kube-api-access-gsgjk\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.176720 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.181972 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.182687 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bee10dce-c68f-47f4-84e0-623f276964d8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.199177 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsgjk\" (UniqueName: \"kubernetes.io/projected/bee10dce-c68f-47f4-84e0-623f276964d8-kube-api-access-gsgjk\") pod \"nova-cell1-conductor-0\" (UID: \"bee10dce-c68f-47f4-84e0-623f276964d8\") " pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.309802 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:43 crc kubenswrapper[4593]: I0129 11:20:43.802541 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 29 11:20:44 crc kubenswrapper[4593]: I0129 11:20:44.021126 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bee10dce-c68f-47f4-84e0-623f276964d8","Type":"ContainerStarted","Data":"5522e839542cc231908bac44f370a5152779d196633377928af10d74f71a95b0"} Jan 29 11:20:44 crc kubenswrapper[4593]: I0129 11:20:44.125905 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:20:44 crc kubenswrapper[4593]: I0129 11:20:44.126008 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:20:44 crc kubenswrapper[4593]: I0129 11:20:44.910468 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-fbf566cdb-kbm9z" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.042813 4593 generic.go:334] "Generic (PLEG): container finished" podID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerID="79e5fad4ce8a136539fe157f20b007cd9dda01813dc5bd26b79f98167ce8f3c8" exitCode=137 Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.042838 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerDied","Data":"79e5fad4ce8a136539fe157f20b007cd9dda01813dc5bd26b79f98167ce8f3c8"} Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.048965 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bee10dce-c68f-47f4-84e0-623f276964d8","Type":"ContainerStarted","Data":"4d614dc400670f15f9dd67948b7cdfabe334a78d7e990ee23c2014481f120b38"} Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.049119 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.086823 4593 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.086801608 podStartE2EDuration="3.086801608s" podCreationTimestamp="2026-01-29 11:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:45.069465757 +0000 UTC m=+1310.942499988" watchObservedRunningTime="2026-01-29 11:20:45.086801608 +0000 UTC m=+1310.959835799" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.518896 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561303 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561385 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561523 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561693 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561754 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bjjr\" (UniqueName: \"kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561781 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.561812 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs\") pod \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\" (UID: \"b9761a4f-8669-4e74-9f8e-ed8b9778af11\") " Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.562829 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs" (OuterVolumeSpecName: "logs") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.580409 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr" (OuterVolumeSpecName: "kube-api-access-5bjjr") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "kube-api-access-5bjjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.582274 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.615427 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts" (OuterVolumeSpecName: "scripts") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.629749 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.638483 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data" (OuterVolumeSpecName: "config-data") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664128 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664191 4593 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664212 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bjjr\" (UniqueName: \"kubernetes.io/projected/b9761a4f-8669-4e74-9f8e-ed8b9778af11-kube-api-access-5bjjr\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664247 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664259 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9761a4f-8669-4e74-9f8e-ed8b9778af11-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664269 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b9761a4f-8669-4e74-9f8e-ed8b9778af11-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.664506 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "b9761a4f-8669-4e74-9f8e-ed8b9778af11" (UID: "b9761a4f-8669-4e74-9f8e-ed8b9778af11"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:45 crc kubenswrapper[4593]: I0129 11:20:45.766665 4593 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9761a4f-8669-4e74-9f8e-ed8b9778af11-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.063499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-fbf566cdb-kbm9z" event={"ID":"b9761a4f-8669-4e74-9f8e-ed8b9778af11","Type":"ContainerDied","Data":"ce4a773b0ca614eb00194b9785007fb66ed555cdb9faf1064f6db03538dbdfaf"} Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.063604 4593 scope.go:117] "RemoveContainer" containerID="3d261a3c68b7921bd914d1e7f66292aa43d7dcf78e137210f6cac9b61a927909" Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.064558 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-fbf566cdb-kbm9z" Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.112730 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.123375 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-fbf566cdb-kbm9z"] Jan 29 11:20:46 crc kubenswrapper[4593]: I0129 11:20:46.259499 4593 scope.go:117] "RemoveContainer" containerID="79e5fad4ce8a136539fe157f20b007cd9dda01813dc5bd26b79f98167ce8f3c8" Jan 29 11:20:47 crc kubenswrapper[4593]: I0129 11:20:47.088095 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" path="/var/lib/kubelet/pods/b9761a4f-8669-4e74-9f8e-ed8b9778af11/volumes" Jan 29 11:20:47 crc kubenswrapper[4593]: I0129 11:20:47.206103 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 11:20:47 crc kubenswrapper[4593]: I0129 11:20:47.238439 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 11:20:48 crc kubenswrapper[4593]: I0129 11:20:48.121375 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 11:20:49 crc kubenswrapper[4593]: I0129 11:20:49.808320 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:20:49 crc kubenswrapper[4593]: I0129 11:20:49.808731 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:20:50 crc kubenswrapper[4593]: I0129 11:20:50.891869 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:20:50 crc kubenswrapper[4593]: I0129 11:20:50.891869 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:20:51 crc kubenswrapper[4593]: I0129 11:20:51.105460 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" probeResult="failure" output=< Jan 29 11:20:51 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:20:51 crc kubenswrapper[4593]: > Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.024288 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.133884 4593 generic.go:334] "Generic (PLEG): container finished" podID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" containerID="e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69" exitCode=137 Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.133950 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8","Type":"ContainerDied","Data":"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69"} Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.133983 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8","Type":"ContainerDied","Data":"afca7bf4b299e69d695725ee22c529f3ea659c864ce859245236b6ced858cb90"} Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.134005 4593 scope.go:117] "RemoveContainer" containerID="e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.134164 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.162584 4593 scope.go:117] "RemoveContainer" containerID="e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69" Jan 29 11:20:52 crc kubenswrapper[4593]: E0129 11:20:52.163436 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69\": container with ID starting with e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69 not found: ID does not exist" containerID="e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.163486 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69"} err="failed to get container status \"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69\": rpc error: code = NotFound desc = could not find container \"e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69\": container with ID starting with e73184b2646dc788b31f373cb46f214041bd4afe8f28004c1f0ce17b08c20d69 not found: ID does not exist" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.174252 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data\") pod \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.174591 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kpzt\" (UniqueName: \"kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt\") pod \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.174670 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle\") pod 
\"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\" (UID: \"d3bc8fe6-dc7c-4731-902d-67d12a0bfef8\") " Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.185191 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt" (OuterVolumeSpecName: "kube-api-access-5kpzt") pod "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" (UID: "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8"). InnerVolumeSpecName "kube-api-access-5kpzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.219968 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data" (OuterVolumeSpecName: "config-data") pod "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" (UID: "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.244811 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" (UID: "d3bc8fe6-dc7c-4731-902d-67d12a0bfef8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.277402 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kpzt\" (UniqueName: \"kubernetes.io/projected/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-kube-api-access-5kpzt\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.277443 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.277453 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.484725 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.505687 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.525582 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:52 crc kubenswrapper[4593]: E0129 11:20:52.526235 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526268 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:20:52 crc kubenswrapper[4593]: E0129 11:20:52.526306 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526315 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc 
kubenswrapper[4593]: E0129 11:20:52.526328 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526336 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: E0129 11:20:52.526388 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon-log" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526398 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon-log" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526668 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" containerName="nova-cell1-novncproxy-novncproxy" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526691 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526703 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526722 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon-log" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.526739 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.527686 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.534540 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.536269 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.536829 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.537615 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.685432 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.685946 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.686080 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.686115 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7m4l\" (UniqueName: \"kubernetes.io/projected/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-kube-api-access-c7m4l\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.686264 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.792619 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.792700 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7m4l\" (UniqueName: \"kubernetes.io/projected/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-kube-api-access-c7m4l\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.792782 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.792881 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.792986 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.798532 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.810582 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.822538 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.824210 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.832411 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7m4l\" (UniqueName: \"kubernetes.io/projected/0b25e9a9-4f12-4b7f-9001-74b6c3feb118-kube-api-access-c7m4l\") pod \"nova-cell1-novncproxy-0\" (UID: \"0b25e9a9-4f12-4b7f-9001-74b6c3feb118\") " pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:52 crc kubenswrapper[4593]: I0129 11:20:52.914126 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.089595 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3bc8fe6-dc7c-4731-902d-67d12a0bfef8" path="/var/lib/kubelet/pods/d3bc8fe6-dc7c-4731-902d-67d12a0bfef8/volumes" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.121291 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.128149 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.129252 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.190078 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.352123 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 29 11:20:53 crc kubenswrapper[4593]: I0129 11:20:53.564509 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 29 11:20:54 crc kubenswrapper[4593]: I0129 11:20:54.179262 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b25e9a9-4f12-4b7f-9001-74b6c3feb118","Type":"ContainerStarted","Data":"ae8d97c1afea9ef91d94a960a07b3449ddd6e5831b50f7f17248b8fdd70aa718"} Jan 29 11:20:54 crc kubenswrapper[4593]: I0129 11:20:54.179521 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0b25e9a9-4f12-4b7f-9001-74b6c3feb118","Type":"ContainerStarted","Data":"1a4d9a57fcbf76afd97da28948543e0ee1cacf12ce28e788ed4aadf97075d766"} Jan 29 11:20:54 crc kubenswrapper[4593]: I0129 11:20:54.203740 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.203715234 podStartE2EDuration="2.203715234s" podCreationTimestamp="2026-01-29 11:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:20:54.199062928 +0000 UTC m=+1320.072097119" watchObservedRunningTime="2026-01-29 11:20:54.203715234 +0000 UTC m=+1320.076749425" Jan 29 11:20:57 crc kubenswrapper[4593]: I0129 11:20:57.914552 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:20:59 crc kubenswrapper[4593]: I0129 11:20:59.813291 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:20:59 crc kubenswrapper[4593]: I0129 11:20:59.816375 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:20:59 crc kubenswrapper[4593]: I0129 11:20:59.817456 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:20:59 crc kubenswrapper[4593]: I0129 11:20:59.829276 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.124879 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:21:00 crc 
kubenswrapper[4593]: I0129 11:21:00.179973 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.250056 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.253296 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.578032 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:21:00 crc kubenswrapper[4593]: E0129 11:21:00.578568 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.578603 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9761a4f-8669-4e74-9f8e-ed8b9778af11" containerName="horizon" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.580138 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.626985 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724579 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724679 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724727 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724766 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724897 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqvv\" (UniqueName: \"kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.724958 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826489 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826617 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqvv\" (UniqueName: \"kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826688 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826752 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.826794 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.828021 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.828053 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.828624 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.828801 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.828794 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.866521 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkqvv\" (UniqueName: \"kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv\") pod \"dnsmasq-dns-cd5cbd7b9-q9gws\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:00 crc kubenswrapper[4593]: I0129 11:21:00.938590 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:01 crc kubenswrapper[4593]: I0129 11:21:01.325173 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:21:01 crc kubenswrapper[4593]: I0129 11:21:01.325717 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k4l8n" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" containerID="cri-o://24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2" gracePeriod=2 Jan 29 11:21:01 crc kubenswrapper[4593]: I0129 11:21:01.494729 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:21:01 crc kubenswrapper[4593]: E0129 11:21:01.678821 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9194cbfb_27b9_47e8_90eb_64b9391d0b07.slice/crio-24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9194cbfb_27b9_47e8_90eb_64b9391d0b07.slice/crio-conmon-24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.030070 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.168954 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pvlg\" (UniqueName: \"kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg\") pod \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.169012 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content\") pod \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.169040 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities\") pod \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\" (UID: \"9194cbfb-27b9-47e8-90eb-64b9391d0b07\") " Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.172219 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities" (OuterVolumeSpecName: "utilities") pod "9194cbfb-27b9-47e8-90eb-64b9391d0b07" (UID: "9194cbfb-27b9-47e8-90eb-64b9391d0b07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.207824 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg" (OuterVolumeSpecName: "kube-api-access-9pvlg") pod "9194cbfb-27b9-47e8-90eb-64b9391d0b07" (UID: "9194cbfb-27b9-47e8-90eb-64b9391d0b07"). InnerVolumeSpecName "kube-api-access-9pvlg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.272423 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pvlg\" (UniqueName: \"kubernetes.io/projected/9194cbfb-27b9-47e8-90eb-64b9391d0b07-kube-api-access-9pvlg\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.272461 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.285308 4593 generic.go:334] "Generic (PLEG): container finished" podID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerID="96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab" exitCode=0 Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.285412 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" event={"ID":"d4645d9f-a4ac-4004-b76e-8f3652a300e6","Type":"ContainerDied","Data":"96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab"} Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.285461 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" event={"ID":"d4645d9f-a4ac-4004-b76e-8f3652a300e6","Type":"ContainerStarted","Data":"c6f1f6dc4fba44b238c92a14ad6df982c542f3af9ec19723b99a766da8d106d2"} Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.320086 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9194cbfb-27b9-47e8-90eb-64b9391d0b07" (UID: "9194cbfb-27b9-47e8-90eb-64b9391d0b07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.330521 4593 generic.go:334] "Generic (PLEG): container finished" podID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerID="24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2" exitCode=0 Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.330815 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2"} Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.330885 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4l8n" event={"ID":"9194cbfb-27b9-47e8-90eb-64b9391d0b07","Type":"ContainerDied","Data":"5ea6d9d61fd2cf95d30b451aea020cc55aa6add991037bc5209ce7d2a046ef7e"} Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.330912 4593 scope.go:117] "RemoveContainer" containerID="24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.331205 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4l8n" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.375907 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9194cbfb-27b9-47e8-90eb-64b9391d0b07-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.452981 4593 scope.go:117] "RemoveContainer" containerID="01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.456431 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.468331 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k4l8n"] Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.506349 4593 scope.go:117] "RemoveContainer" containerID="193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.556253 4593 scope.go:117] "RemoveContainer" containerID="ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.587883 4593 scope.go:117] "RemoveContainer" containerID="24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2" Jan 29 11:21:02 crc kubenswrapper[4593]: E0129 11:21:02.590660 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2\": container with ID starting with 24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2 not found: ID does not exist" containerID="24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.590727 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2"} err="failed to get container status \"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2\": rpc error: code = NotFound desc = could not find container \"24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2\": container with ID starting with 24a48dac79f9737c737a9a6c9feb17fa992bbfb7616bde5d11369bae535e02b2 not found: ID does not exist" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.590755 4593 scope.go:117] "RemoveContainer" containerID="01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95" Jan 29 11:21:02 crc kubenswrapper[4593]: E0129 11:21:02.591224 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95\": container with ID starting with 01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95 not found: ID does not exist" containerID="01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.591272 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95"} err="failed to get container status \"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95\": rpc error: code = NotFound desc = could not find container 
\"01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95\": container with ID starting with 01cfca19ba6e7095e676495e45dbce66f5a74d2b87eaab4f83bc77de55811a95 not found: ID does not exist" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.591291 4593 scope.go:117] "RemoveContainer" containerID="193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f" Jan 29 11:21:02 crc kubenswrapper[4593]: E0129 11:21:02.591750 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f\": container with ID starting with 193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f not found: ID does not exist" containerID="193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.591799 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f"} err="failed to get container status \"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f\": rpc error: code = NotFound desc = could not find container \"193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f\": container with ID starting with 193f9b95fdc94b467f23b2f72d7dfa0f28f6b17c0525596eef4f9076227ed84f not found: ID does not exist" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.591831 4593 scope.go:117] "RemoveContainer" containerID="ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410" Jan 29 11:21:02 crc kubenswrapper[4593]: E0129 11:21:02.600228 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410\": container with ID starting with ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410 not found: ID does not exist" containerID="ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.600533 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410"} err="failed to get container status \"ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410\": rpc error: code = NotFound desc = could not find container \"ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410\": container with ID starting with ba88dc4008912aff189fbe9ab60d1200804baf565d0d0ee6b15f03364bbef410 not found: ID does not exist" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.914866 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:21:02 crc kubenswrapper[4593]: I0129 11:21:02.954378 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.085406 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" path="/var/lib/kubelet/pods/9194cbfb-27b9-47e8-90eb-64b9391d0b07/volumes" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.346293 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" 
event={"ID":"d4645d9f-a4ac-4004-b76e-8f3652a300e6","Type":"ContainerStarted","Data":"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d"} Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.346343 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.367334 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" podStartSLOduration=3.367308115 podStartE2EDuration="3.367308115s" podCreationTimestamp="2026-01-29 11:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:03.365531476 +0000 UTC m=+1329.238565667" watchObservedRunningTime="2026-01-29 11:21:03.367308115 +0000 UTC m=+1329.240342306" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.377298 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.522487 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.528079 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="sg-core" containerID="cri-o://ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.528122 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-notification-agent" containerID="cri-o://1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.528079 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="proxy-httpd" containerID="cri-o://5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.529216 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-central-agent" containerID="cri-o://718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.569667 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4klpz"] Jan 29 11:21:03 crc kubenswrapper[4593]: E0129 11:21:03.570088 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="extract-utilities" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570108 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="extract-utilities" Jan 29 11:21:03 crc kubenswrapper[4593]: E0129 11:21:03.570124 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570130 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" 
containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: E0129 11:21:03.570146 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="extract-content" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570153 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="extract-content" Jan 29 11:21:03 crc kubenswrapper[4593]: E0129 11:21:03.570174 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570180 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570373 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570396 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.570406 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.571035 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.573143 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.573338 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.599911 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4klpz"] Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.615298 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.615409 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.615541 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.615660 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgsmn\" (UniqueName: 
\"kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.635474 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.635695 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-log" containerID="cri-o://339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.635978 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-api" containerID="cri-o://e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5" gracePeriod=30 Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.717592 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.717765 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgsmn\" (UniqueName: \"kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.717819 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.717866 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.723615 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.724572 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.732973 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.742181 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgsmn\" (UniqueName: \"kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn\") pod \"nova-cell1-cell-mapping-4klpz\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:03 crc kubenswrapper[4593]: I0129 11:21:03.937194 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359020 4593 generic.go:334] "Generic (PLEG): container finished" podID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerID="5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9" exitCode=0 Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359428 4593 generic.go:334] "Generic (PLEG): container finished" podID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerID="ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b" exitCode=2 Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359441 4593 generic.go:334] "Generic (PLEG): container finished" podID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerID="718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050" exitCode=0 Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359205 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerDied","Data":"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9"} Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359552 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerDied","Data":"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b"} Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.359569 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerDied","Data":"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050"} Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.362925 4593 generic.go:334] "Generic (PLEG): container finished" podID="ec186581-a9e6-46bb-9479-118d17b02d68" containerID="339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db" exitCode=143 Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.362980 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerDied","Data":"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db"} Jan 29 11:21:04 crc kubenswrapper[4593]: I0129 11:21:04.480381 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4klpz"] Jan 29 11:21:05 crc kubenswrapper[4593]: I0129 11:21:05.382559 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4klpz" event={"ID":"39f1974c-39c2-48ab-96f4-ad9b138bdd2a","Type":"ContainerStarted","Data":"1ea0d35aaa814eafe90d3b552ce2cc9ecd1b47dc4d9629fa6b4ad38749d52cc1"} Jan 29 11:21:05 crc kubenswrapper[4593]: I0129 11:21:05.383620 4593 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4klpz" event={"ID":"39f1974c-39c2-48ab-96f4-ad9b138bdd2a","Type":"ContainerStarted","Data":"8d964d0f6fd7a3a0690290e5907b2f72debcae58f7a1f3f8fa117ebd225127d0"} Jan 29 11:21:05 crc kubenswrapper[4593]: I0129 11:21:05.415060 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4klpz" podStartSLOduration=2.415033965 podStartE2EDuration="2.415033965s" podCreationTimestamp="2026-01-29 11:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:05.402487835 +0000 UTC m=+1331.275522026" watchObservedRunningTime="2026-01-29 11:21:05.415033965 +0000 UTC m=+1331.288068156" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.242238 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.315766 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data\") pod \"ec186581-a9e6-46bb-9479-118d17b02d68\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.316865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgrs2\" (UniqueName: \"kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2\") pod \"ec186581-a9e6-46bb-9479-118d17b02d68\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.317998 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs\") pod \"ec186581-a9e6-46bb-9479-118d17b02d68\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.318029 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle\") pod \"ec186581-a9e6-46bb-9479-118d17b02d68\" (UID: \"ec186581-a9e6-46bb-9479-118d17b02d68\") " Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.320516 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs" (OuterVolumeSpecName: "logs") pod "ec186581-a9e6-46bb-9479-118d17b02d68" (UID: "ec186581-a9e6-46bb-9479-118d17b02d68"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.324088 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2" (OuterVolumeSpecName: "kube-api-access-sgrs2") pod "ec186581-a9e6-46bb-9479-118d17b02d68" (UID: "ec186581-a9e6-46bb-9479-118d17b02d68"). InnerVolumeSpecName "kube-api-access-sgrs2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.395863 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data" (OuterVolumeSpecName: "config-data") pod "ec186581-a9e6-46bb-9479-118d17b02d68" (UID: "ec186581-a9e6-46bb-9479-118d17b02d68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.421962 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec186581-a9e6-46bb-9479-118d17b02d68-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.421989 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.421999 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgrs2\" (UniqueName: \"kubernetes.io/projected/ec186581-a9e6-46bb-9479-118d17b02d68-kube-api-access-sgrs2\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.423896 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec186581-a9e6-46bb-9479-118d17b02d68" (UID: "ec186581-a9e6-46bb-9479-118d17b02d68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424238 4593 generic.go:334] "Generic (PLEG): container finished" podID="ec186581-a9e6-46bb-9479-118d17b02d68" containerID="e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5" exitCode=0 Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424299 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerDied","Data":"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5"} Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424326 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec186581-a9e6-46bb-9479-118d17b02d68","Type":"ContainerDied","Data":"f7db3d2de4fdf878656547d9c3589d171005e852c5677ab4b1055551daeb9535"} Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424342 4593 scope.go:117] "RemoveContainer" containerID="e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.424375 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.497858 4593 scope.go:117] "RemoveContainer" containerID="339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.502285 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.520988 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.524401 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec186581-a9e6-46bb-9479-118d17b02d68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.553295 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:07 crc kubenswrapper[4593]: E0129 11:21:07.553810 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.553825 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9194cbfb-27b9-47e8-90eb-64b9391d0b07" containerName="registry-server" Jan 29 11:21:07 crc kubenswrapper[4593]: E0129 11:21:07.553847 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-log" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.553853 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-log" Jan 29 11:21:07 crc kubenswrapper[4593]: E0129 11:21:07.553863 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-api" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.553869 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-api" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.554037 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-log" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.554060 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" containerName="nova-api-api" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.555129 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.561271 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.561491 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.565269 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.578416 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.579956 4593 scope.go:117] "RemoveContainer" containerID="e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5" Jan 29 11:21:07 crc kubenswrapper[4593]: E0129 11:21:07.580749 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5\": container with ID starting with e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5 not found: ID does not exist" containerID="e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.580788 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5"} err="failed to get container status \"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5\": rpc error: code = NotFound desc = could not find container \"e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5\": container with ID starting with e2dff2a6a81eaa182c9da8785f80eca46bc877d24f3ff7ddcedb8630f6e64bf5 not found: ID does not exist" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.580832 4593 scope.go:117] "RemoveContainer" containerID="339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db" Jan 29 11:21:07 crc kubenswrapper[4593]: E0129 11:21:07.581284 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db\": container with ID starting with 339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db not found: ID does not exist" containerID="339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.581318 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db"} err="failed to get container status \"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db\": rpc error: code = NotFound desc = could not find container \"339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db\": container with ID starting with 339836b893ef773758f5cc7b98358356a20301a3f047614f5c37232d52e5e9db not found: ID does not exist" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.625974 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5g6f\" (UniqueName: \"kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" 
Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.626080 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.626126 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.626157 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.626223 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.626331 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.728490 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5g6f\" (UniqueName: \"kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.728979 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.729025 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.729047 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.729072 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.729170 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.729888 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.734310 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.735367 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.736143 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.736823 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.745757 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5g6f\" (UniqueName: \"kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f\") pod \"nova-api-0\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " pod="openstack/nova-api-0" Jan 29 11:21:07 crc kubenswrapper[4593]: I0129 11:21:07.897913 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.454551 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:08 crc kubenswrapper[4593]: W0129 11:21:08.476105 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode880ed3e_b1e4_40f6_bd7a_45b5e0e1c2b6.slice/crio-ae8e074c1c0c0dd530e330b0aefcc3c1e2e24788eaa38738b85e121e979bb77a WatchSource:0}: Error finding container ae8e074c1c0c0dd530e330b0aefcc3c1e2e24788eaa38738b85e121e979bb77a: Status 404 returned error can't find the container with id ae8e074c1c0c0dd530e330b0aefcc3c1e2e24788eaa38738b85e121e979bb77a Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.754380 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.853838 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.853901 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.853967 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.853984 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.854026 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.854052 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.854075 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q87z\" (UniqueName: \"kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.854134 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data\") pod \"934ccdca-f1e6-43d2-af69-2efb205bf387\" (UID: \"934ccdca-f1e6-43d2-af69-2efb205bf387\") " Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.863480 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.864361 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.873889 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z" (OuterVolumeSpecName: "kube-api-access-9q87z") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "kube-api-access-9q87z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.876100 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts" (OuterVolumeSpecName: "scripts") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.956884 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.957206 4593 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.957314 4593 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/934ccdca-f1e6-43d2-af69-2efb205bf387-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.957410 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q87z\" (UniqueName: \"kubernetes.io/projected/934ccdca-f1e6-43d2-af69-2efb205bf387-kube-api-access-9q87z\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.963936 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:08 crc kubenswrapper[4593]: I0129 11:21:08.997486 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.065085 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.065127 4593 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.072017 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data" (OuterVolumeSpecName: "config-data") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.077472 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "934ccdca-f1e6-43d2-af69-2efb205bf387" (UID: "934ccdca-f1e6-43d2-af69-2efb205bf387"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.099574 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec186581-a9e6-46bb-9479-118d17b02d68" path="/var/lib/kubelet/pods/ec186581-a9e6-46bb-9479-118d17b02d68/volumes" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.167058 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.167318 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934ccdca-f1e6-43d2-af69-2efb205bf387-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.448212 4593 generic.go:334] "Generic (PLEG): container finished" podID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerID="1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f" exitCode=0 Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.448538 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerDied","Data":"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f"} Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.448565 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"934ccdca-f1e6-43d2-af69-2efb205bf387","Type":"ContainerDied","Data":"4dce39e9f6258739668c6759897048e09e8458a8965cc4d5beb204c4759ad763"} Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.448582 4593 scope.go:117] "RemoveContainer" containerID="5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.448722 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.453407 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerStarted","Data":"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1"} Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.453455 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerStarted","Data":"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b"} Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.453470 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerStarted","Data":"ae8e074c1c0c0dd530e330b0aefcc3c1e2e24788eaa38738b85e121e979bb77a"} Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.483905 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.484250 4593 scope.go:117] "RemoveContainer" containerID="ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.533542 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.535477 4593 scope.go:117] "RemoveContainer" containerID="1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.540068 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5400305469999997 podStartE2EDuration="2.540030547s" podCreationTimestamp="2026-01-29 11:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:09.517608339 +0000 UTC m=+1335.390642540" watchObservedRunningTime="2026-01-29 11:21:09.540030547 +0000 UTC m=+1335.413064738" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.589782 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.590320 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="sg-core" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590337 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="sg-core" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.590352 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-central-agent" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590358 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-central-agent" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.590365 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-notification-agent" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590371 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-notification-agent" Jan 29 11:21:09 crc 
kubenswrapper[4593]: E0129 11:21:09.590397 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="proxy-httpd" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590403 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="proxy-httpd" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590587 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="sg-core" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590609 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-central-agent" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590621 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="proxy-httpd" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.590647 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" containerName="ceilometer-notification-agent" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.592653 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.597070 4593 scope.go:117] "RemoveContainer" containerID="718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.599213 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.599401 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.599497 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.601763 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.643920 4593 scope.go:117] "RemoveContainer" containerID="5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.644349 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9\": container with ID starting with 5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9 not found: ID does not exist" containerID="5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.644384 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9"} err="failed to get container status \"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9\": rpc error: code = NotFound desc = could not find container \"5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9\": container with ID starting with 5650870c53a815d139ee07b273db9e4da617bca758fc88b27ec7225ece9545c9 not found: ID does not exist" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.644406 4593 scope.go:117] "RemoveContainer" 
containerID="ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.644764 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b\": container with ID starting with ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b not found: ID does not exist" containerID="ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.644808 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b"} err="failed to get container status \"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b\": rpc error: code = NotFound desc = could not find container \"ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b\": container with ID starting with ccb1cce5f72a27026fa0dff03cca969d96af413b780e118d7f695f65f57ee35b not found: ID does not exist" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.644838 4593 scope.go:117] "RemoveContainer" containerID="1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.645142 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f\": container with ID starting with 1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f not found: ID does not exist" containerID="1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.645176 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f"} err="failed to get container status \"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f\": rpc error: code = NotFound desc = could not find container \"1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f\": container with ID starting with 1014e7c08fad200b51dc9f731c6b2a97edba268c54e461a9ca8ef7f2d5441a7f not found: ID does not exist" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.645193 4593 scope.go:117] "RemoveContainer" containerID="718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050" Jan 29 11:21:09 crc kubenswrapper[4593]: E0129 11:21:09.645477 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050\": container with ID starting with 718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050 not found: ID does not exist" containerID="718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.645505 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050"} err="failed to get container status \"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050\": rpc error: code = NotFound desc = could not find container \"718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050\": container with ID starting with 
718067f3b9f8669b499eaa09968b871882953292383cd9cadbaa67bc9b808050 not found: ID does not exist" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.674377 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.674679 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.674831 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-run-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.674942 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-scripts\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.675050 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9tw\" (UniqueName: \"kubernetes.io/projected/8581bb16-8d35-4521-8886-3c71554a3a4d-kube-api-access-6t9tw\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.675159 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.675252 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-config-data\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.675330 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-log-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.776773 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777159 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777244 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-run-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777369 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-scripts\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777490 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9tw\" (UniqueName: \"kubernetes.io/projected/8581bb16-8d35-4521-8886-3c71554a3a4d-kube-api-access-6t9tw\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777610 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777819 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-config-data\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.777942 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-log-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.778148 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-run-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.778417 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8581bb16-8d35-4521-8886-3c71554a3a4d-log-httpd\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.783280 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.783548 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.783889 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.786563 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-scripts\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.788206 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8581bb16-8d35-4521-8886-3c71554a3a4d-config-data\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.801397 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9tw\" (UniqueName: \"kubernetes.io/projected/8581bb16-8d35-4521-8886-3c71554a3a4d-kube-api-access-6t9tw\") pod \"ceilometer-0\" (UID: \"8581bb16-8d35-4521-8886-3c71554a3a4d\") " pod="openstack/ceilometer-0" Jan 29 11:21:09 crc kubenswrapper[4593]: I0129 11:21:09.928928 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 29 11:21:10 crc kubenswrapper[4593]: I0129 11:21:10.451466 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 29 11:21:10 crc kubenswrapper[4593]: W0129 11:21:10.455465 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8581bb16_8d35_4521_8886_3c71554a3a4d.slice/crio-f6bd3a6530e1c82fd552581b2874e176186933aaefba9d871c4f8370d018c933 WatchSource:0}: Error finding container f6bd3a6530e1c82fd552581b2874e176186933aaefba9d871c4f8370d018c933: Status 404 returned error can't find the container with id f6bd3a6530e1c82fd552581b2874e176186933aaefba9d871c4f8370d018c933 Jan 29 11:21:10 crc kubenswrapper[4593]: I0129 11:21:10.894939 4593 scope.go:117] "RemoveContainer" containerID="2d726601a06f0f3b078ac9cfab32d3c08235958370c6a2e0cae055cc410e3e0d" Jan 29 11:21:10 crc kubenswrapper[4593]: I0129 11:21:10.940874 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.051444 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"] Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.051886 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="dnsmasq-dns" containerID="cri-o://5c3d893d50de695f2752e97704ce1977c263a00d43a535d7cade0a1f98508eeb" gracePeriod=10 Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.095913 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934ccdca-f1e6-43d2-af69-2efb205bf387" 
path="/var/lib/kubelet/pods/934ccdca-f1e6-43d2-af69-2efb205bf387/volumes" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.478647 4593 generic.go:334] "Generic (PLEG): container finished" podID="697e4dbe-9b00-4891-9456-f76cb9642401" containerID="5c3d893d50de695f2752e97704ce1977c263a00d43a535d7cade0a1f98508eeb" exitCode=0 Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.478659 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" event={"ID":"697e4dbe-9b00-4891-9456-f76cb9642401","Type":"ContainerDied","Data":"5c3d893d50de695f2752e97704ce1977c263a00d43a535d7cade0a1f98508eeb"} Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.485820 4593 generic.go:334] "Generic (PLEG): container finished" podID="39f1974c-39c2-48ab-96f4-ad9b138bdd2a" containerID="1ea0d35aaa814eafe90d3b552ce2cc9ecd1b47dc4d9629fa6b4ad38749d52cc1" exitCode=0 Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.485895 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4klpz" event={"ID":"39f1974c-39c2-48ab-96f4-ad9b138bdd2a","Type":"ContainerDied","Data":"1ea0d35aaa814eafe90d3b552ce2cc9ecd1b47dc4d9629fa6b4ad38749d52cc1"} Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.489671 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8581bb16-8d35-4521-8886-3c71554a3a4d","Type":"ContainerStarted","Data":"f6bd3a6530e1c82fd552581b2874e176186933aaefba9d871c4f8370d018c933"} Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.577115 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720309 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720421 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720513 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720625 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720676 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.720724 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7ppsw\" (UniqueName: \"kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw\") pod \"697e4dbe-9b00-4891-9456-f76cb9642401\" (UID: \"697e4dbe-9b00-4891-9456-f76cb9642401\") " Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.726620 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw" (OuterVolumeSpecName: "kube-api-access-7ppsw") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "kube-api-access-7ppsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.793550 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.812812 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config" (OuterVolumeSpecName: "config") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.819522 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.822207 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ppsw\" (UniqueName: \"kubernetes.io/projected/697e4dbe-9b00-4891-9456-f76cb9642401-kube-api-access-7ppsw\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.822227 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.822237 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.822246 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.825129 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.840198 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "697e4dbe-9b00-4891-9456-f76cb9642401" (UID: "697e4dbe-9b00-4891-9456-f76cb9642401"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.923803 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:11 crc kubenswrapper[4593]: I0129 11:21:11.924987 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/697e4dbe-9b00-4891-9456-f76cb9642401-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.501412 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8581bb16-8d35-4521-8886-3c71554a3a4d","Type":"ContainerStarted","Data":"f022985901fafb0e1edf6beb865adbec3ab446e664ba4bce07baeda349fe8f88"} Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.503420 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" event={"ID":"697e4dbe-9b00-4891-9456-f76cb9642401","Type":"ContainerDied","Data":"7eb448007e7f2f259e7551ed6226b778b13ff57e3f9a0c2ec212e1fb5e5be79a"} Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.503467 4593 scope.go:117] "RemoveContainer" containerID="5c3d893d50de695f2752e97704ce1977c263a00d43a535d7cade0a1f98508eeb" Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.503436 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-bsx9x" Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.534610 4593 scope.go:117] "RemoveContainer" containerID="7393be6f52eedddb8f2e44100a437ddd9c4a6aceb5605fe268b7dc5e484c61b6" Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.554575 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"] Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.565225 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-bsx9x"] Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.860355 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.992707 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle\") pod \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.993651 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgsmn\" (UniqueName: \"kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn\") pod \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.993776 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts\") pod \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " Jan 29 11:21:12 crc kubenswrapper[4593]: I0129 11:21:12.993881 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data\") pod \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\" (UID: \"39f1974c-39c2-48ab-96f4-ad9b138bdd2a\") " Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.000125 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn" (OuterVolumeSpecName: "kube-api-access-xgsmn") pod "39f1974c-39c2-48ab-96f4-ad9b138bdd2a" (UID: "39f1974c-39c2-48ab-96f4-ad9b138bdd2a"). InnerVolumeSpecName "kube-api-access-xgsmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.001265 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts" (OuterVolumeSpecName: "scripts") pod "39f1974c-39c2-48ab-96f4-ad9b138bdd2a" (UID: "39f1974c-39c2-48ab-96f4-ad9b138bdd2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.024320 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data" (OuterVolumeSpecName: "config-data") pod "39f1974c-39c2-48ab-96f4-ad9b138bdd2a" (UID: "39f1974c-39c2-48ab-96f4-ad9b138bdd2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.032534 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39f1974c-39c2-48ab-96f4-ad9b138bdd2a" (UID: "39f1974c-39c2-48ab-96f4-ad9b138bdd2a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.091230 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" path="/var/lib/kubelet/pods/697e4dbe-9b00-4891-9456-f76cb9642401/volumes" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.101861 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.101910 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgsmn\" (UniqueName: \"kubernetes.io/projected/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-kube-api-access-xgsmn\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.101927 4593 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-scripts\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.101943 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f1974c-39c2-48ab-96f4-ad9b138bdd2a-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.518761 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8581bb16-8d35-4521-8886-3c71554a3a4d","Type":"ContainerStarted","Data":"d6fe9c8cef1aaf2e257ab06d4df70f87b85fb8c00f94feac5166cf1b6dd99b4e"} Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.519555 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8581bb16-8d35-4521-8886-3c71554a3a4d","Type":"ContainerStarted","Data":"de6839d22a803c3f2ec07740614bc85bfd6e56d1aa57f5f3ef20bc4f7ee3ad36"} Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.522076 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4klpz" event={"ID":"39f1974c-39c2-48ab-96f4-ad9b138bdd2a","Type":"ContainerDied","Data":"8d964d0f6fd7a3a0690290e5907b2f72debcae58f7a1f3f8fa117ebd225127d0"} Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.522212 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d964d0f6fd7a3a0690290e5907b2f72debcae58f7a1f3f8fa117ebd225127d0" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.522381 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4klpz" Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.630531 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.630804 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-log" containerID="cri-o://55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" gracePeriod=30 Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.632214 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-api" containerID="cri-o://879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" gracePeriod=30 Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.667328 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.667572 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerName="nova-scheduler-scheduler" containerID="cri-o://f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" gracePeriod=30 Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.723066 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.723356 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-log" containerID="cri-o://24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c" gracePeriod=30 Jan 29 11:21:13 crc kubenswrapper[4593]: I0129 11:21:13.723481 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-metadata" containerID="cri-o://cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df" gracePeriod=30 Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.294823 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.433822 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5g6f\" (UniqueName: \"kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.433876 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.433922 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.434041 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.434135 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.434167 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data\") pod \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\" (UID: \"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6\") " Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.436429 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs" (OuterVolumeSpecName: "logs") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.464420 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f" (OuterVolumeSpecName: "kube-api-access-b5g6f") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "kube-api-access-b5g6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.479517 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.509949 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data" (OuterVolumeSpecName: "config-data") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.514715 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.540896 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.540925 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.540935 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.540945 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5g6f\" (UniqueName: \"kubernetes.io/projected/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-kube-api-access-b5g6f\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.540953 4593 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.556658 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" (UID: "e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.560036 4593 generic.go:334] "Generic (PLEG): container finished" podID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerID="24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c" exitCode=143 Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.561295 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerDied","Data":"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c"} Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572810 4593 generic.go:334] "Generic (PLEG): container finished" podID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerID="879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" exitCode=0 Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572849 4593 generic.go:334] "Generic (PLEG): container finished" podID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerID="55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" exitCode=143 Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572875 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerDied","Data":"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1"} Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572902 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerDied","Data":"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b"} Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572913 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6","Type":"ContainerDied","Data":"ae8e074c1c0c0dd530e330b0aefcc3c1e2e24788eaa38738b85e121e979bb77a"} Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.572927 4593 scope.go:117] "RemoveContainer" containerID="879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.573054 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.645511 4593 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.649253 4593 scope.go:117] "RemoveContainer" containerID="55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.657343 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.678832 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.691242 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.693816 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-log" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.693867 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-log" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.693902 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="dnsmasq-dns" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.693911 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="dnsmasq-dns" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.693957 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-api" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.693969 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-api" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.693985 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="init" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.693993 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="init" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.694039 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f1974c-39c2-48ab-96f4-ad9b138bdd2a" containerName="nova-manage" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.694049 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f1974c-39c2-48ab-96f4-ad9b138bdd2a" containerName="nova-manage" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.694441 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-log" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.694462 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" containerName="nova-api-api" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.694476 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="697e4dbe-9b00-4891-9456-f76cb9642401" containerName="dnsmasq-dns" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.694524 4593 
memory_manager.go:354] "RemoveStaleState removing state" podUID="39f1974c-39c2-48ab-96f4-ad9b138bdd2a" containerName="nova-manage" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.697953 4593 scope.go:117] "RemoveContainer" containerID="879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.699181 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1\": container with ID starting with 879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1 not found: ID does not exist" containerID="879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.699245 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1"} err="failed to get container status \"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1\": rpc error: code = NotFound desc = could not find container \"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1\": container with ID starting with 879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1 not found: ID does not exist" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.699278 4593 scope.go:117] "RemoveContainer" containerID="55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.700986 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: E0129 11:21:14.704911 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b\": container with ID starting with 55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b not found: ID does not exist" containerID="55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.704987 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b"} err="failed to get container status \"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b\": rpc error: code = NotFound desc = could not find container \"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b\": container with ID starting with 55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b not found: ID does not exist" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.705045 4593 scope.go:117] "RemoveContainer" containerID="879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.707454 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.707775 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1"} err="failed to get container status \"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1\": rpc error: code = NotFound desc = could not find container 
\"879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1\": container with ID starting with 879a69311ad4f13daa625de126f709236f3f76866881d9eb8382a7af4abe87d1 not found: ID does not exist" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.707808 4593 scope.go:117] "RemoveContainer" containerID="55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.708078 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.708314 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b"} err="failed to get container status \"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b\": rpc error: code = NotFound desc = could not find container \"55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b\": container with ID starting with 55e52d439584ca0153acf40b3a3953f6f00d4a00a6f8778c9a63dc288f246b4b not found: ID does not exist" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.708488 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.719301 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.747271 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.747592 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mjrn\" (UniqueName: \"kubernetes.io/projected/0d08c570-1374-4c5a-832e-c973d7a39796-kube-api-access-2mjrn\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.747726 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d08c570-1374-4c5a-832e-c973d7a39796-logs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.747886 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.748000 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-config-data\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.748207 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.850009 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d08c570-1374-4c5a-832e-c973d7a39796-logs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.850471 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.850430 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d08c570-1374-4c5a-832e-c973d7a39796-logs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.851415 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-config-data\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.851857 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.852297 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.852668 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mjrn\" (UniqueName: \"kubernetes.io/projected/0d08c570-1374-4c5a-832e-c973d7a39796-kube-api-access-2mjrn\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.857147 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.857788 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.857928 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-config-data\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.860433 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d08c570-1374-4c5a-832e-c973d7a39796-public-tls-certs\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:14 crc kubenswrapper[4593]: I0129 11:21:14.877557 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mjrn\" (UniqueName: \"kubernetes.io/projected/0d08c570-1374-4c5a-832e-c973d7a39796-kube-api-access-2mjrn\") pod \"nova-api-0\" (UID: \"0d08c570-1374-4c5a-832e-c973d7a39796\") " pod="openstack/nova-api-0" Jan 29 11:21:15 crc kubenswrapper[4593]: I0129 11:21:15.031847 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 29 11:21:15 crc kubenswrapper[4593]: I0129 11:21:15.145947 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6" path="/var/lib/kubelet/pods/e880ed3e-b1e4-40f6-bd7a-45b5e0e1c2b6/volumes" Jan 29 11:21:15 crc kubenswrapper[4593]: I0129 11:21:15.601024 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.608897 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8581bb16-8d35-4521-8886-3c71554a3a4d","Type":"ContainerStarted","Data":"7a937c89fb9109b345f6f22c51e0a60188931bf44b81b647fac5bcc01cf19596"} Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.609615 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.612669 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d08c570-1374-4c5a-832e-c973d7a39796","Type":"ContainerStarted","Data":"e1dc8489211673f5a24d00e649bbdc05dd87332bd16220a4800c62a5e142b3cd"} Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.612728 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d08c570-1374-4c5a-832e-c973d7a39796","Type":"ContainerStarted","Data":"4f34a69cd0ccc8a436c33666880e5411f3eb6ba4b621cea8cd63c32738c221fa"} Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.612741 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0d08c570-1374-4c5a-832e-c973d7a39796","Type":"ContainerStarted","Data":"b49cf6bde7134a1d6381169e903a4ce3cd1d72b2b35b1a183bb95ec308c2979a"} Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.643273 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.545222436 podStartE2EDuration="7.643249257s" podCreationTimestamp="2026-01-29 11:21:09 +0000 UTC" firstStartedPulling="2026-01-29 11:21:10.457790087 +0000 UTC m=+1336.330824278" lastFinishedPulling="2026-01-29 11:21:15.555816898 +0000 UTC m=+1341.428851099" observedRunningTime="2026-01-29 11:21:16.627261943 +0000 UTC m=+1342.500296144" watchObservedRunningTime="2026-01-29 11:21:16.643249257 +0000 UTC m=+1342.516283448" Jan 29 11:21:16 crc kubenswrapper[4593]: I0129 11:21:16.665801 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.665777607 podStartE2EDuration="2.665777607s" podCreationTimestamp="2026-01-29 11:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:16.656126695 +0000 UTC m=+1342.529160896" watchObservedRunningTime="2026-01-29 11:21:16.665777607 +0000 UTC m=+1342.538811798" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.218743 4593 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 is running failed: container process not found" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.221498 4593 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 is running failed: container process not found" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.221842 4593 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 is running failed: container process not found" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.221922 4593 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerName="nova-scheduler-scheduler" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.343470 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.417565 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fww9l\" (UniqueName: \"kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l\") pod \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.417833 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data\") pod \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.417913 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs\") pod \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.418051 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs\") pod \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.418101 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle\") pod \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\" (UID: \"eaa00230-26f8-4fa7-b32c-994ec82a6ac4\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.421779 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs" (OuterVolumeSpecName: "logs") pod "eaa00230-26f8-4fa7-b32c-994ec82a6ac4" (UID: "eaa00230-26f8-4fa7-b32c-994ec82a6ac4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.434832 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l" (OuterVolumeSpecName: "kube-api-access-fww9l") pod "eaa00230-26f8-4fa7-b32c-994ec82a6ac4" (UID: "eaa00230-26f8-4fa7-b32c-994ec82a6ac4"). InnerVolumeSpecName "kube-api-access-fww9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.500430 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data" (OuterVolumeSpecName: "config-data") pod "eaa00230-26f8-4fa7-b32c-994ec82a6ac4" (UID: "eaa00230-26f8-4fa7-b32c-994ec82a6ac4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.523470 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.523500 4593 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-logs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.523509 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fww9l\" (UniqueName: \"kubernetes.io/projected/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-kube-api-access-fww9l\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.560839 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaa00230-26f8-4fa7-b32c-994ec82a6ac4" (UID: "eaa00230-26f8-4fa7-b32c-994ec82a6ac4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.573185 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "eaa00230-26f8-4fa7-b32c-994ec82a6ac4" (UID: "eaa00230-26f8-4fa7-b32c-994ec82a6ac4"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.622484 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.625418 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.625447 4593 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/eaa00230-26f8-4fa7-b32c-994ec82a6ac4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.633461 4593 generic.go:334] "Generic (PLEG): container finished" podID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" exitCode=0 Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.633533 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40dd43f0-0621-4358-8019-b58cd5fbcc79","Type":"ContainerDied","Data":"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058"} Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.633568 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40dd43f0-0621-4358-8019-b58cd5fbcc79","Type":"ContainerDied","Data":"c94ac2729f1f8331d111e95fa7df8974b6fcb7da88f692f7369227d26b750286"} Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.633592 4593 scope.go:117] "RemoveContainer" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.633722 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.648436 4593 generic.go:334] "Generic (PLEG): container finished" podID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerID="cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df" exitCode=0 Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.648720 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.648793 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerDied","Data":"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df"} Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.648818 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eaa00230-26f8-4fa7-b32c-994ec82a6ac4","Type":"ContainerDied","Data":"185a6935f58efd39bffafb91700164ea93f85ee3879bc888a2a51ac02343ec6a"} Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.676386 4593 scope.go:117] "RemoveContainer" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.677014 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058\": container with ID starting with f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 not found: ID does not exist" containerID="f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.677061 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058"} err="failed to get container status \"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058\": rpc error: code = NotFound desc = could not find container \"f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058\": container with ID starting with f16071a895ef62e3b2991f0d721fed2c818f2dd3d4c4185d21b603f6f1de6058 not found: ID does not exist" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.677089 4593 scope.go:117] "RemoveContainer" containerID="cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.709837 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.724107 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.726699 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data\") pod \"40dd43f0-0621-4358-8019-b58cd5fbcc79\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.726875 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpt5m\" (UniqueName: \"kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m\") pod \"40dd43f0-0621-4358-8019-b58cd5fbcc79\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.726923 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle\") pod \"40dd43f0-0621-4358-8019-b58cd5fbcc79\" (UID: \"40dd43f0-0621-4358-8019-b58cd5fbcc79\") " Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.738323 4593 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m" (OuterVolumeSpecName: "kube-api-access-kpt5m") pod "40dd43f0-0621-4358-8019-b58cd5fbcc79" (UID: "40dd43f0-0621-4358-8019-b58cd5fbcc79"). InnerVolumeSpecName "kube-api-access-kpt5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.742194 4593 scope.go:117] "RemoveContainer" containerID="24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743047 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.743438 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerName="nova-scheduler-scheduler" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743474 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerName="nova-scheduler-scheduler" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.743494 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-metadata" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743501 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-metadata" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.743529 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-log" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743552 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-log" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743756 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" containerName="nova-scheduler-scheduler" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743792 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-metadata" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.743808 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" containerName="nova-metadata-log" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.751547 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.758438 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.759078 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.763846 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.788838 4593 scope.go:117] "RemoveContainer" containerID="cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.789581 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df\": container with ID starting with cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df not found: ID does not exist" containerID="cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.789613 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df"} err="failed to get container status \"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df\": rpc error: code = NotFound desc = could not find container \"cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df\": container with ID starting with cf41dd0fb5a7b655b2dfa2beee5825aad4a8df4c8f985e8aebe9c425662911df not found: ID does not exist" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.789657 4593 scope.go:117] "RemoveContainer" containerID="24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c" Jan 29 11:21:17 crc kubenswrapper[4593]: E0129 11:21:17.790058 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c\": container with ID starting with 24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c not found: ID does not exist" containerID="24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.790076 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c"} err="failed to get container status \"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c\": rpc error: code = NotFound desc = could not find container \"24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c\": container with ID starting with 24c6fe2689133cb0ec4931234ff5577d826f6e6f68c542687334c8d0dfe09c4c not found: ID does not exist" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.798916 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data" (OuterVolumeSpecName: "config-data") pod "40dd43f0-0621-4358-8019-b58cd5fbcc79" (UID: "40dd43f0-0621-4358-8019-b58cd5fbcc79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829399 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ph92\" (UniqueName: \"kubernetes.io/projected/649faf5c-e6bb-4e3d-8cb5-28c57f100008-kube-api-access-8ph92\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829478 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829517 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/649faf5c-e6bb-4e3d-8cb5-28c57f100008-logs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829590 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829613 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-config-data\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829682 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpt5m\" (UniqueName: \"kubernetes.io/projected/40dd43f0-0621-4358-8019-b58cd5fbcc79-kube-api-access-kpt5m\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.829694 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.841829 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40dd43f0-0621-4358-8019-b58cd5fbcc79" (UID: "40dd43f0-0621-4358-8019-b58cd5fbcc79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.930932 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ph92\" (UniqueName: \"kubernetes.io/projected/649faf5c-e6bb-4e3d-8cb5-28c57f100008-kube-api-access-8ph92\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.931499 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.931656 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/649faf5c-e6bb-4e3d-8cb5-28c57f100008-logs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.931808 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.931887 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-config-data\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.932007 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40dd43f0-0621-4358-8019-b58cd5fbcc79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.932556 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/649faf5c-e6bb-4e3d-8cb5-28c57f100008-logs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.935669 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-config-data\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.936131 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.937115 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/649faf5c-e6bb-4e3d-8cb5-28c57f100008-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " 
pod="openstack/nova-metadata-0" Jan 29 11:21:17 crc kubenswrapper[4593]: I0129 11:21:17.965234 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ph92\" (UniqueName: \"kubernetes.io/projected/649faf5c-e6bb-4e3d-8cb5-28c57f100008-kube-api-access-8ph92\") pod \"nova-metadata-0\" (UID: \"649faf5c-e6bb-4e3d-8cb5-28c57f100008\") " pod="openstack/nova-metadata-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.089669 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.098681 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.099822 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.114332 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.119411 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.124033 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.128496 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.238135 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm9ll\" (UniqueName: \"kubernetes.io/projected/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-kube-api-access-xm9ll\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.238503 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.238661 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-config-data\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.340842 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-config-data\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.340954 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm9ll\" (UniqueName: \"kubernetes.io/projected/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-kube-api-access-xm9ll\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.340982 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.346978 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.347159 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-config-data\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.365960 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm9ll\" (UniqueName: \"kubernetes.io/projected/4eff0b9f-e2c4-4ae0-9b42-585f9141d740-kube-api-access-xm9ll\") pod \"nova-scheduler-0\" (UID: \"4eff0b9f-e2c4-4ae0-9b42-585f9141d740\") " pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.531726 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.576466 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 29 11:21:18 crc kubenswrapper[4593]: I0129 11:21:18.665597 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"649faf5c-e6bb-4e3d-8cb5-28c57f100008","Type":"ContainerStarted","Data":"21aef2da5eea28bdb4c686b164d4d33176d54adf3d3ed82af36c2ade08a857ca"} Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.012927 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 29 11:21:19 crc kubenswrapper[4593]: W0129 11:21:19.016500 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4eff0b9f_e2c4_4ae0_9b42_585f9141d740.slice/crio-f4a523e045a2d45ac99d3f668d9667fd6319543b192cb872e4b9d66b1491015a WatchSource:0}: Error finding container f4a523e045a2d45ac99d3f668d9667fd6319543b192cb872e4b9d66b1491015a: Status 404 returned error can't find the container with id f4a523e045a2d45ac99d3f668d9667fd6319543b192cb872e4b9d66b1491015a Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.088497 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40dd43f0-0621-4358-8019-b58cd5fbcc79" path="/var/lib/kubelet/pods/40dd43f0-0621-4358-8019-b58cd5fbcc79/volumes" Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.089784 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa00230-26f8-4fa7-b32c-994ec82a6ac4" path="/var/lib/kubelet/pods/eaa00230-26f8-4fa7-b32c-994ec82a6ac4/volumes" Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.678164 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"649faf5c-e6bb-4e3d-8cb5-28c57f100008","Type":"ContainerStarted","Data":"4d773d722a5618f7389efdb82ee16c253498b2a7d6513aa8ff6b7f987f512d54"} Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.678216 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"649faf5c-e6bb-4e3d-8cb5-28c57f100008","Type":"ContainerStarted","Data":"9f84d2fca65e709b5b83135138cd04706d337eba2d35b53a555fb6a431ad8831"} Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.680267 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4eff0b9f-e2c4-4ae0-9b42-585f9141d740","Type":"ContainerStarted","Data":"dfbb4a0969380b9fadf88a508ec4f02f949105466a18e19478177689ef066784"} Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.680312 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4eff0b9f-e2c4-4ae0-9b42-585f9141d740","Type":"ContainerStarted","Data":"f4a523e045a2d45ac99d3f668d9667fd6319543b192cb872e4b9d66b1491015a"} Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.722850 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.722833169 podStartE2EDuration="1.722833169s" podCreationTimestamp="2026-01-29 11:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:19.718879202 +0000 UTC m=+1345.591913393" watchObservedRunningTime="2026-01-29 11:21:19.722833169 +0000 UTC m=+1345.595867360" Jan 29 11:21:19 crc kubenswrapper[4593]: I0129 11:21:19.723951 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.723944849 podStartE2EDuration="2.723944849s" podCreationTimestamp="2026-01-29 11:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:21:19.703840724 +0000 UTC m=+1345.576874925" watchObservedRunningTime="2026-01-29 11:21:19.723944849 +0000 UTC m=+1345.596979040" Jan 29 11:21:23 crc kubenswrapper[4593]: I0129 11:21:23.100140 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:21:23 crc kubenswrapper[4593]: I0129 11:21:23.101739 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 29 11:21:23 crc kubenswrapper[4593]: I0129 11:21:23.532012 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 29 11:21:25 crc kubenswrapper[4593]: I0129 11:21:25.032809 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:21:25 crc kubenswrapper[4593]: I0129 11:21:25.032881 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 29 11:21:26 crc kubenswrapper[4593]: I0129 11:21:26.052007 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d08c570-1374-4c5a-832e-c973d7a39796" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:21:26 crc kubenswrapper[4593]: I0129 11:21:26.052030 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0d08c570-1374-4c5a-832e-c973d7a39796" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.206:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:21:28 crc kubenswrapper[4593]: I0129 11:21:28.100271 4593 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:21:28 crc kubenswrapper[4593]: I0129 11:21:28.100665 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 29 11:21:28 crc kubenswrapper[4593]: I0129 11:21:28.531953 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 29 11:21:28 crc kubenswrapper[4593]: I0129 11:21:28.571653 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 29 11:21:28 crc kubenswrapper[4593]: I0129 11:21:28.792868 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 29 11:21:29 crc kubenswrapper[4593]: I0129 11:21:29.114809 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="649faf5c-e6bb-4e3d-8cb5-28c57f100008" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 11:21:29 crc kubenswrapper[4593]: I0129 11:21:29.114841 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="649faf5c-e6bb-4e3d-8cb5-28c57f100008" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.043446 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.044393 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.050232 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.054491 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.825868 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 29 11:21:35 crc kubenswrapper[4593]: I0129 11:21:35.832934 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 29 11:21:38 crc kubenswrapper[4593]: I0129 11:21:38.189187 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:21:38 crc kubenswrapper[4593]: I0129 11:21:38.199463 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 29 11:21:38 crc kubenswrapper[4593]: I0129 11:21:38.200022 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:21:38 crc kubenswrapper[4593]: I0129 11:21:38.866368 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 29 11:21:39 crc kubenswrapper[4593]: I0129 11:21:39.938489 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 29 11:21:49 crc kubenswrapper[4593]: I0129 11:21:49.663532 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:21:50 crc kubenswrapper[4593]: I0129 11:21:50.882155 4593 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:21:55 crc kubenswrapper[4593]: I0129 11:21:55.452099 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" containerID="cri-o://b4905f54e6b8f178fee9edd7eecf274cac9966dfb2e310545422ab1ab6e185c0" gracePeriod=604795 Jan 29 11:21:55 crc kubenswrapper[4593]: I0129 11:21:55.816212 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.94:5671: connect: connection refused" Jan 29 11:21:55 crc kubenswrapper[4593]: I0129 11:21:55.947081 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" containerID="cri-o://a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112" gracePeriod=604795 Jan 29 11:21:56 crc kubenswrapper[4593]: I0129 11:21:56.259923 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.093072 4593 generic.go:334] "Generic (PLEG): container finished" podID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerID="b4905f54e6b8f178fee9edd7eecf274cac9966dfb2e310545422ab1ab6e185c0" exitCode=0 Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.093556 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerDied","Data":"b4905f54e6b8f178fee9edd7eecf274cac9966dfb2e310545422ab1ab6e185c0"} Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.279770 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309199 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309292 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gt4f\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309391 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309406 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309430 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309462 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309480 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309523 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309608 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309649 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: 
\"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.309673 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info\") pod \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\" (UID: \"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.310016 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.310426 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.317268 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.339059 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info" (OuterVolumeSpecName: "pod-info") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.366018 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f" (OuterVolumeSpecName: "kube-api-access-5gt4f") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "kube-api-access-5gt4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.374672 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.374900 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.385012 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433180 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433211 4593 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433221 4593 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433234 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433245 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433271 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433279 4593 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.433288 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gt4f\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-kube-api-access-5gt4f\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.467984 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data" (OuterVolumeSpecName: "config-data") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.475312 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf" (OuterVolumeSpecName: "server-conf") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.536849 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.536895 4593 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.537509 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.598094 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.622601 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" (UID: "f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.642053 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.642101 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.743043 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.743848 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744341 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744541 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744656 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744680 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744703 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744729 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744762 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pmxq\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744817 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744846 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.744880 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins\") pod \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\" (UID: \"db2ccd2b-429d-43e8-a674-fb5c2abb0754\") " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.745761 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.746231 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). 
InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.748474 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.749716 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.751146 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.751176 4593 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.751190 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.751206 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.751218 4593 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2ccd2b-429d-43e8-a674-fb5c2abb0754-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.752044 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq" (OuterVolumeSpecName: "kube-api-access-6pmxq") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "kube-api-access-6pmxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.763064 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info" (OuterVolumeSpecName: "pod-info") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.780908 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). 
InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.808129 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data" (OuterVolumeSpecName: "config-data") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.855127 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.855176 4593 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2ccd2b-429d-43e8-a674-fb5c2abb0754-pod-info\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.855212 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.855227 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pmxq\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-kube-api-access-6pmxq\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.897764 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:02 crc kubenswrapper[4593]: E0129 11:22:02.898193 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.898211 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: E0129 11:22:02.898228 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.898235 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: E0129 11:22:02.898257 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="setup-container" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.898263 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="setup-container" Jan 29 11:22:02 crc kubenswrapper[4593]: E0129 11:22:02.898276 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="setup-container" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.898283 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="setup-container" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.898462 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 
11:22:02.898482 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" containerName="rabbitmq" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.899483 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.902328 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.907350 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.962447 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.962553 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:02 crc kubenswrapper[4593]: I0129 11:22:02.995307 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf" (OuterVolumeSpecName: "server-conf") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.064931 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.064975 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065034 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065055 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8nc\" (UniqueName: \"kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065117 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " 
pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065151 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065232 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065289 4593 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2ccd2b-429d-43e8-a674-fb5c2abb0754-server-conf\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.065414 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "db2ccd2b-429d-43e8-a674-fb5c2abb0754" (UID: "db2ccd2b-429d-43e8-a674-fb5c2abb0754"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.112007 4593 generic.go:334] "Generic (PLEG): container finished" podID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" containerID="a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112" exitCode=0 Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.112050 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.112091 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerDied","Data":"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112"} Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.112123 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2ccd2b-429d-43e8-a674-fb5c2abb0754","Type":"ContainerDied","Data":"5a494b5365040c8bc0ddefc581e932c4375131be0145147547aba83d5a596b24"} Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.112142 4593 scope.go:117] "RemoveContainer" containerID="a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.118615 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e","Type":"ContainerDied","Data":"5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090"} Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.118668 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.157887 4593 scope.go:117] "RemoveContainer" containerID="6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169667 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169718 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169761 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169782 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8nc\" (UniqueName: \"kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169841 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169874 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169906 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.169955 4593 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2ccd2b-429d-43e8-a674-fb5c2abb0754-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.170740 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: 
\"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.171141 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.171538 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.171720 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.172104 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.172301 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.188407 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.211200 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.216520 4593 scope.go:117] "RemoveContainer" containerID="a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112" Jan 29 11:22:03 crc kubenswrapper[4593]: E0129 11:22:03.217446 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112\": container with ID starting with a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112 not found: ID does not exist" containerID="a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.217479 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112"} err="failed to get container status \"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112\": rpc error: code = NotFound desc = could not find container \"a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112\": container with ID starting with a5f4f1ce8f769804b224118a6ef670e7ab165b034ee99bc6126f73ead60da112 not found: ID does not exist" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 
11:22:03.217501 4593 scope.go:117] "RemoveContainer" containerID="6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f" Jan 29 11:22:03 crc kubenswrapper[4593]: E0129 11:22:03.221282 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f\": container with ID starting with 6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f not found: ID does not exist" containerID="6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.221325 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f"} err="failed to get container status \"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f\": rpc error: code = NotFound desc = could not find container \"6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f\": container with ID starting with 6d261168add925568a421f585a6004956179df4396d9af74a221541b8db2b16f not found: ID does not exist" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.221346 4593 scope.go:117] "RemoveContainer" containerID="b4905f54e6b8f178fee9edd7eecf274cac9966dfb2e310545422ab1ab6e185c0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.225209 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8nc\" (UniqueName: \"kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc\") pod \"dnsmasq-dns-d558885bc-vm2qn\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.266859 4593 scope.go:117] "RemoveContainer" containerID="44978dbad6338f76a863bda910ccc44233b86b74e07d252f43136dd31d7cd624" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.271546 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.311116 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.340890 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.351225 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.355714 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: E0129 11:22:03.359443 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb2ccd2b_429d_43e8_a674_fb5c2abb0754.slice/crio-5a494b5365040c8bc0ddefc581e932c4375131be0145147547aba83d5a596b24\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f6d0a4_2543_4de8_a64e_f3ce4c2bb11e.slice/crio-5d7fdf36d82144d193388373adf2f7188be08e39ae09d760625349b240578090\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0f6d0a4_2543_4de8_a64e_f3ce4c2bb11e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb2ccd2b_429d_43e8_a674_fb5c2abb0754.slice\": RecentStats: unable to find data in memory cache]" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.363969 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.364105 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.364217 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.364411 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.364455 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.365186 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ck876" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.365289 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.388917 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.390458 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.393212 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.393906 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ztnqn" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.394196 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.394365 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.394483 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.397318 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.397427 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.401710 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.414117 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481620 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481676 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cqm9\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-kube-api-access-9cqm9\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481706 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481724 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481756 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481782 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481812 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481831 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481850 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66e64ba6-3c75-4430-9f03-0fe9dbb37459-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481866 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63184534-fd04-4ef9-9c56-de6c30745ec4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481894 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-config-data\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481924 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481940 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/66e64ba6-3c75-4430-9f03-0fe9dbb37459-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481967 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.481989 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482008 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482031 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63184534-fd04-4ef9-9c56-de6c30745ec4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482046 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482066 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdq4\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-kube-api-access-hwdq4\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482089 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482104 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.482135 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583548 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc 
kubenswrapper[4593]: I0129 11:22:03.583586 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583619 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583662 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583693 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583709 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583728 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66e64ba6-3c75-4430-9f03-0fe9dbb37459-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583744 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63184534-fd04-4ef9-9c56-de6c30745ec4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583769 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-config-data\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583797 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583813 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/66e64ba6-3c75-4430-9f03-0fe9dbb37459-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583841 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583861 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583877 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583897 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63184534-fd04-4ef9-9c56-de6c30745ec4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583918 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583940 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdq4\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-kube-api-access-hwdq4\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583962 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.583975 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.584005 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 
11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.584024 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.584040 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cqm9\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-kube-api-access-9cqm9\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.585329 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.588670 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.589397 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.590056 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.591319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.591617 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.598388 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.598703 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.599342 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/66e64ba6-3c75-4430-9f03-0fe9dbb37459-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.599448 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.619734 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.620428 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.620818 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.628307 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.630200 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.630536 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/66e64ba6-3c75-4430-9f03-0fe9dbb37459-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.632489 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63184534-fd04-4ef9-9c56-de6c30745ec4-config-data\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.647538 
4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cqm9\" (UniqueName: \"kubernetes.io/projected/66e64ba6-3c75-4430-9f03-0fe9dbb37459-kube-api-access-9cqm9\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.648317 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/66e64ba6-3c75-4430-9f03-0fe9dbb37459-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.649828 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63184534-fd04-4ef9-9c56-de6c30745ec4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.705600 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwdq4\" (UniqueName: \"kubernetes.io/projected/63184534-fd04-4ef9-9c56-de6c30745ec4-kube-api-access-hwdq4\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.721351 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63184534-fd04-4ef9-9c56-de6c30745ec4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.788544 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"66e64ba6-3c75-4430-9f03-0fe9dbb37459\") " pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.798442 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"63184534-fd04-4ef9-9c56-de6c30745ec4\") " pod="openstack/rabbitmq-server-0" Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.902162 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:03 crc kubenswrapper[4593]: I0129 11:22:03.994174 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 29 11:22:04 crc kubenswrapper[4593]: I0129 11:22:04.018015 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:04 crc kubenswrapper[4593]: I0129 11:22:04.141424 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" event={"ID":"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071","Type":"ContainerStarted","Data":"b8c6914ce6bbd8622ddb4421f17355f5778b3203bfad364b74e640dad724f7dd"} Jan 29 11:22:04 crc kubenswrapper[4593]: I0129 11:22:04.668168 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 29 11:22:04 crc kubenswrapper[4593]: I0129 11:22:04.771323 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.086739 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db2ccd2b-429d-43e8-a674-fb5c2abb0754" path="/var/lib/kubelet/pods/db2ccd2b-429d-43e8-a674-fb5c2abb0754/volumes" Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.088033 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e" path="/var/lib/kubelet/pods/f0f6d0a4-2543-4de8-a64e-f3ce4c2bb11e/volumes" Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.155116 4593 generic.go:334] "Generic (PLEG): container finished" podID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerID="e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa" exitCode=0 Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.156331 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" event={"ID":"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071","Type":"ContainerDied","Data":"e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa"} Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.161400 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63184534-fd04-4ef9-9c56-de6c30745ec4","Type":"ContainerStarted","Data":"91850b17c124d531934cd1d41292f78eceeecb5b1f93cdd3527be41eabefdc07"} Jan 29 11:22:05 crc kubenswrapper[4593]: I0129 11:22:05.164379 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66e64ba6-3c75-4430-9f03-0fe9dbb37459","Type":"ContainerStarted","Data":"6b81389095008434927f0697d4d4568ed6334b5826b58593df7a630a1f127e84"} Jan 29 11:22:06 crc kubenswrapper[4593]: I0129 11:22:06.177434 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" event={"ID":"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071","Type":"ContainerStarted","Data":"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5"} Jan 29 11:22:06 crc kubenswrapper[4593]: I0129 11:22:06.177876 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:06 crc kubenswrapper[4593]: I0129 11:22:06.212025 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" podStartSLOduration=4.211999616 podStartE2EDuration="4.211999616s" podCreationTimestamp="2026-01-29 11:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:22:06.204909014 +0000 UTC m=+1392.077943205" watchObservedRunningTime="2026-01-29 11:22:06.211999616 +0000 UTC m=+1392.085033807" Jan 29 11:22:07 crc kubenswrapper[4593]: I0129 11:22:07.190429 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66e64ba6-3c75-4430-9f03-0fe9dbb37459","Type":"ContainerStarted","Data":"fb5f6e8b858298de266fd1d35275745d1ef5ea779cdb71d6a175383173b07d5f"} Jan 29 11:22:07 crc kubenswrapper[4593]: I0129 11:22:07.193114 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63184534-fd04-4ef9-9c56-de6c30745ec4","Type":"ContainerStarted","Data":"d31cda1918e987444533908c599c296c91f9ed31f8f512c214c26df676d4fcdc"} Jan 29 11:22:11 crc kubenswrapper[4593]: I0129 11:22:11.410307 4593 scope.go:117] "RemoveContainer" containerID="d6a963ebfb97713a0a7f5c7f7df33e57f221e22a4c463e45ec8292bcb918f3d4" Jan 29 11:22:11 crc kubenswrapper[4593]: I0129 11:22:11.445107 4593 scope.go:117] "RemoveContainer" containerID="bb01aea62e7547286b44d9743a913549a411ace53ed9b60fd827a2aca107007a" Jan 29 11:22:11 crc kubenswrapper[4593]: I0129 11:22:11.501731 4593 scope.go:117] "RemoveContainer" containerID="b731ce61732546e5002e6093b39d4676cefa4ead9d8427f5427a357a3a10832e" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.343875 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.436303 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.438916 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="dnsmasq-dns" containerID="cri-o://479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d" gracePeriod=10 Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.681675 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-mqmln"] Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.683527 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.784004 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-mqmln"] Jan 29 11:22:13 crc kubenswrapper[4593]: E0129 11:22:13.789845 4593 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4645d9f_a4ac_4004_b76e_8f3652a300e6.slice/crio-conmon-479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d.scope\": RecentStats: unable to find data in memory cache]" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834625 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-config\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834707 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmkc\" (UniqueName: \"kubernetes.io/projected/07012c75-f2fe-400a-b511-d0cc18a1ca9c-kube-api-access-xxmkc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834740 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834758 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834803 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834861 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.834882 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc 
kubenswrapper[4593]: I0129 11:22:13.939714 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-config\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.939849 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxmkc\" (UniqueName: \"kubernetes.io/projected/07012c75-f2fe-400a-b511-d0cc18a1ca9c-kube-api-access-xxmkc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.939962 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.939995 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.940064 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.940155 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.940186 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.941178 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-swift-storage-0\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.941746 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-config\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.941758 4593 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-nb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.942270 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-openstack-edpm-ipam\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.942799 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-dns-svc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.943091 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/07012c75-f2fe-400a-b511-d0cc18a1ca9c-ovsdbserver-sb\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:13 crc kubenswrapper[4593]: I0129 11:22:13.963079 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxmkc\" (UniqueName: \"kubernetes.io/projected/07012c75-f2fe-400a-b511-d0cc18a1ca9c-kube-api-access-xxmkc\") pod \"dnsmasq-dns-67cb876dc9-mqmln\" (UID: \"07012c75-f2fe-400a-b511-d0cc18a1ca9c\") " pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.087137 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.114305 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.254596 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.254762 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkqvv\" (UniqueName: \"kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.254863 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.254886 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.255355 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.255394 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config\") pod \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\" (UID: \"d4645d9f-a4ac-4004-b76e-8f3652a300e6\") " Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.264747 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv" (OuterVolumeSpecName: "kube-api-access-lkqvv") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "kube-api-access-lkqvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.282979 4593 generic.go:334] "Generic (PLEG): container finished" podID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerID="479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d" exitCode=0 Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.283029 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" event={"ID":"d4645d9f-a4ac-4004-b76e-8f3652a300e6","Type":"ContainerDied","Data":"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d"} Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.283065 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" event={"ID":"d4645d9f-a4ac-4004-b76e-8f3652a300e6","Type":"ContainerDied","Data":"c6f1f6dc4fba44b238c92a14ad6df982c542f3af9ec19723b99a766da8d106d2"} Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.283124 4593 scope.go:117] "RemoveContainer" containerID="479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.283314 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-q9gws" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.321885 4593 scope.go:117] "RemoveContainer" containerID="96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.356296 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.362934 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.362959 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkqvv\" (UniqueName: \"kubernetes.io/projected/d4645d9f-a4ac-4004-b76e-8f3652a300e6-kube-api-access-lkqvv\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.375080 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.402791 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.410660 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.430915 4593 scope.go:117] "RemoveContainer" containerID="479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d" Jan 29 11:22:14 crc kubenswrapper[4593]: E0129 11:22:14.431432 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d\": container with ID starting with 479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d not found: ID does not exist" containerID="479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.431459 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d"} err="failed to get container status \"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d\": rpc error: code = NotFound desc = could not find container \"479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d\": container with ID starting with 479233623cb8278cfb48210dd03d033cd40365c2fb3ed9d12d89ee82e355d19d not found: ID does not exist" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.431484 4593 scope.go:117] "RemoveContainer" containerID="96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab" Jan 29 11:22:14 crc kubenswrapper[4593]: E0129 11:22:14.434885 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab\": container with ID starting with 96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab not found: ID does not exist" containerID="96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.434913 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab"} err="failed to get container status \"96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab\": rpc error: code = NotFound desc = could not find container \"96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab\": container with ID starting with 96f4460809918886f218fdb0369ac16533266e781abac3ab2236acb263eb30ab not found: ID does not exist" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.451291 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config" (OuterVolumeSpecName: "config") pod "d4645d9f-a4ac-4004-b76e-8f3652a300e6" (UID: "d4645d9f-a4ac-4004-b76e-8f3652a300e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.472953 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.472999 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.473014 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.473027 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4645d9f-a4ac-4004-b76e-8f3652a300e6-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.639435 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.648522 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-q9gws"] Jan 29 11:22:14 crc kubenswrapper[4593]: I0129 11:22:14.668055 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cb876dc9-mqmln"] Jan 29 11:22:15 crc kubenswrapper[4593]: I0129 11:22:15.128595 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" path="/var/lib/kubelet/pods/d4645d9f-a4ac-4004-b76e-8f3652a300e6/volumes" Jan 29 11:22:15 crc kubenswrapper[4593]: I0129 11:22:15.292813 4593 generic.go:334] "Generic (PLEG): container finished" podID="07012c75-f2fe-400a-b511-d0cc18a1ca9c" containerID="966fbd7555bc4ff5cc929848b271c330469b2a65aade2cef4295d87e832c1a5a" exitCode=0 Jan 29 11:22:15 crc kubenswrapper[4593]: I0129 11:22:15.292861 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" event={"ID":"07012c75-f2fe-400a-b511-d0cc18a1ca9c","Type":"ContainerDied","Data":"966fbd7555bc4ff5cc929848b271c330469b2a65aade2cef4295d87e832c1a5a"} Jan 29 11:22:15 crc kubenswrapper[4593]: I0129 11:22:15.292905 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" event={"ID":"07012c75-f2fe-400a-b511-d0cc18a1ca9c","Type":"ContainerStarted","Data":"7a2fc4545d35d33c6e744dd171c7d20cf3bb835be3ee07db4caa68cdffd9347f"} Jan 29 11:22:16 crc kubenswrapper[4593]: I0129 11:22:16.307738 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" event={"ID":"07012c75-f2fe-400a-b511-d0cc18a1ca9c","Type":"ContainerStarted","Data":"eb30b54d4e438ba3a2e833ecaf77af7d70e8dedd0442a5914574f9e50d781c6e"} Jan 29 11:22:16 crc kubenswrapper[4593]: I0129 11:22:16.308126 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:16 crc kubenswrapper[4593]: I0129 11:22:16.332973 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" podStartSLOduration=3.332941761 podStartE2EDuration="3.332941761s" podCreationTimestamp="2026-01-29 11:22:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:22:16.32852188 +0000 UTC m=+1402.201556071" watchObservedRunningTime="2026-01-29 11:22:16.332941761 +0000 UTC m=+1402.205975952" Jan 29 11:22:24 crc kubenswrapper[4593]: I0129 11:22:24.087776 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67cb876dc9-mqmln" Jan 29 11:22:24 crc kubenswrapper[4593]: I0129 11:22:24.228820 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:24 crc kubenswrapper[4593]: I0129 11:22:24.229112 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="dnsmasq-dns" containerID="cri-o://d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5" gracePeriod=10 Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.208015 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303465 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303539 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303758 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303819 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn8nc\" (UniqueName: \"kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303926 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc\") pod \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.303968 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam\") pod 
\"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\" (UID: \"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071\") " Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.325495 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc" (OuterVolumeSpecName: "kube-api-access-gn8nc") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "kube-api-access-gn8nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.368028 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.377317 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.380256 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.395985 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.396474 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config" (OuterVolumeSpecName: "config") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406123 4593 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406162 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn8nc\" (UniqueName: \"kubernetes.io/projected/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-kube-api-access-gn8nc\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406174 4593 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406183 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406192 4593 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-config\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.406200 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.408613 4593 generic.go:334] "Generic (PLEG): container finished" podID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerID="d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5" exitCode=0 Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.408669 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" event={"ID":"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071","Type":"ContainerDied","Data":"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5"} Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.408869 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" event={"ID":"a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071","Type":"ContainerDied","Data":"b8c6914ce6bbd8622ddb4421f17355f5778b3203bfad364b74e640dad724f7dd"} Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.408934 4593 scope.go:117] "RemoveContainer" containerID="d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.408738 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-vm2qn" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.418088 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" (UID: "a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.508328 4593 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.514397 4593 scope.go:117] "RemoveContainer" containerID="e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.543850 4593 scope.go:117] "RemoveContainer" containerID="d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5" Jan 29 11:22:25 crc kubenswrapper[4593]: E0129 11:22:25.544466 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5\": container with ID starting with d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5 not found: ID does not exist" containerID="d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.544535 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5"} err="failed to get container status \"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5\": rpc error: code = NotFound desc = could not find container \"d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5\": container with ID starting with d1b73be0194f7d001c2cbe9fbfefe9f7cd9bcd2022a016195305d71e903be0a5 not found: ID does not exist" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.544571 4593 scope.go:117] "RemoveContainer" containerID="e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa" Jan 29 11:22:25 crc kubenswrapper[4593]: E0129 11:22:25.545172 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa\": container with ID starting with e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa not found: ID does not exist" containerID="e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.545315 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa"} err="failed to get container status \"e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa\": rpc error: code = NotFound desc = could not find container \"e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa\": container with ID starting with e722ce6843d516c6551831ac498dbd3bde5a4e0e97f571602928d818ca9dafaa not found: ID does not exist" Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.753522 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:25 crc kubenswrapper[4593]: I0129 11:22:25.763503 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-vm2qn"] Jan 29 11:22:27 crc kubenswrapper[4593]: I0129 11:22:27.087404 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" path="/var/lib/kubelet/pods/a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071/volumes" 
Jan 29 11:22:33 crc kubenswrapper[4593]: I0129 11:22:33.946417 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:22:33 crc kubenswrapper[4593]: I0129 11:22:33.947137 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:22:38 crc kubenswrapper[4593]: I0129 11:22:38.577240 4593 generic.go:334] "Generic (PLEG): container finished" podID="66e64ba6-3c75-4430-9f03-0fe9dbb37459" containerID="fb5f6e8b858298de266fd1d35275745d1ef5ea779cdb71d6a175383173b07d5f" exitCode=0 Jan 29 11:22:38 crc kubenswrapper[4593]: I0129 11:22:38.577844 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66e64ba6-3c75-4430-9f03-0fe9dbb37459","Type":"ContainerDied","Data":"fb5f6e8b858298de266fd1d35275745d1ef5ea779cdb71d6a175383173b07d5f"} Jan 29 11:22:39 crc kubenswrapper[4593]: I0129 11:22:39.588703 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"66e64ba6-3c75-4430-9f03-0fe9dbb37459","Type":"ContainerStarted","Data":"f0c1716909775e83461a904751462ca67b2b58527ce2987524c74d21fd94fd70"} Jan 29 11:22:39 crc kubenswrapper[4593]: I0129 11:22:39.589192 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:22:39 crc kubenswrapper[4593]: I0129 11:22:39.596134 4593 generic.go:334] "Generic (PLEG): container finished" podID="63184534-fd04-4ef9-9c56-de6c30745ec4" containerID="d31cda1918e987444533908c599c296c91f9ed31f8f512c214c26df676d4fcdc" exitCode=0 Jan 29 11:22:39 crc kubenswrapper[4593]: I0129 11:22:39.596204 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63184534-fd04-4ef9-9c56-de6c30745ec4","Type":"ContainerDied","Data":"d31cda1918e987444533908c599c296c91f9ed31f8f512c214c26df676d4fcdc"} Jan 29 11:22:39 crc kubenswrapper[4593]: I0129 11:22:39.638271 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.638246018 podStartE2EDuration="36.638246018s" podCreationTimestamp="2026-01-29 11:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:22:39.633390366 +0000 UTC m=+1425.506424557" watchObservedRunningTime="2026-01-29 11:22:39.638246018 +0000 UTC m=+1425.511280209" Jan 29 11:22:40 crc kubenswrapper[4593]: I0129 11:22:40.608418 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"63184534-fd04-4ef9-9c56-de6c30745ec4","Type":"ContainerStarted","Data":"cede0cad0a000e524418d7a0cf0912537e7953c668c7ccbdb10f2a56ce41c175"} Jan 29 11:22:40 crc kubenswrapper[4593]: I0129 11:22:40.609590 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 29 11:22:40 crc kubenswrapper[4593]: I0129 11:22:40.643676 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-0" podStartSLOduration=37.643628352 podStartE2EDuration="37.643628352s" podCreationTimestamp="2026-01-29 11:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 11:22:40.636678084 +0000 UTC m=+1426.509712285" watchObservedRunningTime="2026-01-29 11:22:40.643628352 +0000 UTC m=+1426.516662543" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.032797 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb"] Jan 29 11:22:47 crc kubenswrapper[4593]: E0129 11:22:47.033574 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033587 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: E0129 11:22:47.033597 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="init" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033603 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="init" Jan 29 11:22:47 crc kubenswrapper[4593]: E0129 11:22:47.033617 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="init" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033623 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="init" Jan 29 11:22:47 crc kubenswrapper[4593]: E0129 11:22:47.033651 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033659 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033842 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4b7ac5b-1630-4a0f-9c9a-dfacaaa56071" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.033862 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4645d9f-a4ac-4004-b76e-8f3652a300e6" containerName="dnsmasq-dns" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.034430 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.037618 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.038354 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.038386 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.041899 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.057898 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.058197 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.058285 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx9bg\" (UniqueName: \"kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.058382 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.063313 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb"] Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.160170 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.160259 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.160285 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.160301 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx9bg\" (UniqueName: \"kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.166050 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.167292 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.182504 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.183134 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx9bg\" (UniqueName: \"kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:47 crc kubenswrapper[4593]: I0129 11:22:47.355987 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:22:48 crc kubenswrapper[4593]: I0129 11:22:48.193981 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb"] Jan 29 11:22:48 crc kubenswrapper[4593]: W0129 11:22:48.209034 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3e4e3e3_1994_40a5_bab8_d84db2f44ddb.slice/crio-4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911 WatchSource:0}: Error finding container 4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911: Status 404 returned error can't find the container with id 4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911 Jan 29 11:22:48 crc kubenswrapper[4593]: I0129 11:22:48.695411 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" event={"ID":"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb","Type":"ContainerStarted","Data":"4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911"} Jan 29 11:22:53 crc kubenswrapper[4593]: I0129 11:22:53.998006 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 29 11:22:54 crc kubenswrapper[4593]: I0129 11:22:54.043393 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 29 11:23:00 crc kubenswrapper[4593]: I0129 11:23:00.066505 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:23:00 crc kubenswrapper[4593]: I0129 11:23:00.838962 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" event={"ID":"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb","Type":"ContainerStarted","Data":"b4256d122a9578d2ec330718f5347f9fbc13135f7a1bbc8107ea8d0b808b7e74"} Jan 29 11:23:00 crc kubenswrapper[4593]: I0129 11:23:00.864298 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" podStartSLOduration=2.01100882 podStartE2EDuration="13.864261808s" podCreationTimestamp="2026-01-29 11:22:47 +0000 UTC" firstStartedPulling="2026-01-29 11:22:48.211075209 +0000 UTC m=+1434.084109400" lastFinishedPulling="2026-01-29 11:23:00.064328197 +0000 UTC m=+1445.937362388" observedRunningTime="2026-01-29 11:23:00.854785202 +0000 UTC m=+1446.727819403" watchObservedRunningTime="2026-01-29 11:23:00.864261808 +0000 UTC m=+1446.737295999" Jan 29 11:23:03 crc kubenswrapper[4593]: I0129 11:23:03.946571 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:23:03 crc kubenswrapper[4593]: I0129 11:23:03.947233 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:23:15 crc kubenswrapper[4593]: I0129 11:23:15.995410 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" containerID="b4256d122a9578d2ec330718f5347f9fbc13135f7a1bbc8107ea8d0b808b7e74" exitCode=0 Jan 29 11:23:15 crc kubenswrapper[4593]: I0129 11:23:15.995504 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" event={"ID":"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb","Type":"ContainerDied","Data":"b4256d122a9578d2ec330718f5347f9fbc13135f7a1bbc8107ea8d0b808b7e74"} Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.527921 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.685087 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx9bg\" (UniqueName: \"kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg\") pod \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.685153 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory\") pod \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.685227 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam\") pod \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.685452 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle\") pod \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\" (UID: \"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb\") " Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.706698 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" (UID: "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.715961 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg" (OuterVolumeSpecName: "kube-api-access-mx9bg") pod "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" (UID: "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb"). InnerVolumeSpecName "kube-api-access-mx9bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.723893 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" (UID: "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.728266 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory" (OuterVolumeSpecName: "inventory") pod "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" (UID: "c3e4e3e3-1994-40a5-bab8-d84db2f44ddb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.787616 4593 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.787691 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx9bg\" (UniqueName: \"kubernetes.io/projected/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-kube-api-access-mx9bg\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.787708 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:17 crc kubenswrapper[4593]: I0129 11:23:17.787720 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3e4e3e3-1994-40a5-bab8-d84db2f44ddb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.021863 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" event={"ID":"c3e4e3e3-1994-40a5-bab8-d84db2f44ddb","Type":"ContainerDied","Data":"4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911"} Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.021910 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dce58a5aa3bd2e461af589b8d719f1e5644830c22a908638317259e25587911" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.021979 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.187279 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5"] Jan 29 11:23:18 crc kubenswrapper[4593]: E0129 11:23:18.187873 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.187899 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.188169 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3e4e3e3-1994-40a5-bab8-d84db2f44ddb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.188988 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.195222 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.196130 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.196459 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.196455 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.207003 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5"] Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.297688 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.297796 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxvtx\" (UniqueName: \"kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.297829 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.399269 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.399423 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.399448 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxvtx\" (UniqueName: \"kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.405104 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.411140 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.426468 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxvtx\" (UniqueName: \"kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-7tzj5\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:18 crc kubenswrapper[4593]: I0129 11:23:18.522401 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:19 crc kubenswrapper[4593]: I0129 11:23:19.298776 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5"] Jan 29 11:23:20 crc kubenswrapper[4593]: I0129 11:23:20.039897 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" event={"ID":"ce80c16f-5109-46b9-9438-4f05a4132175","Type":"ContainerStarted","Data":"8bb418a005f09c4d6aa7fb45209905c676a3ac1244c00e9b891a5a9b4387ad6a"} Jan 29 11:23:21 crc kubenswrapper[4593]: I0129 11:23:21.050624 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" event={"ID":"ce80c16f-5109-46b9-9438-4f05a4132175","Type":"ContainerStarted","Data":"faea85351cda05ece426a63e59c4f9ccd6e9b1955b988769b98202cd83285465"} Jan 29 11:23:21 crc kubenswrapper[4593]: I0129 11:23:21.082253 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" podStartSLOduration=2.574486239 podStartE2EDuration="3.082218381s" podCreationTimestamp="2026-01-29 11:23:18 +0000 UTC" firstStartedPulling="2026-01-29 11:23:19.307602993 +0000 UTC m=+1465.180637184" lastFinishedPulling="2026-01-29 11:23:19.815335135 +0000 UTC m=+1465.688369326" observedRunningTime="2026-01-29 11:23:21.072316373 +0000 UTC m=+1466.945350564" watchObservedRunningTime="2026-01-29 11:23:21.082218381 +0000 UTC m=+1466.955252582" Jan 29 11:23:23 crc kubenswrapper[4593]: I0129 11:23:23.069873 4593 generic.go:334] "Generic (PLEG): container finished" podID="ce80c16f-5109-46b9-9438-4f05a4132175" containerID="faea85351cda05ece426a63e59c4f9ccd6e9b1955b988769b98202cd83285465" exitCode=0 Jan 29 11:23:23 crc kubenswrapper[4593]: I0129 11:23:23.069924 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" event={"ID":"ce80c16f-5109-46b9-9438-4f05a4132175","Type":"ContainerDied","Data":"faea85351cda05ece426a63e59c4f9ccd6e9b1955b988769b98202cd83285465"} Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.533866 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.631324 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory\") pod \"ce80c16f-5109-46b9-9438-4f05a4132175\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.631445 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam\") pod \"ce80c16f-5109-46b9-9438-4f05a4132175\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.631521 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxvtx\" (UniqueName: \"kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx\") pod \"ce80c16f-5109-46b9-9438-4f05a4132175\" (UID: \"ce80c16f-5109-46b9-9438-4f05a4132175\") " Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.641995 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx" (OuterVolumeSpecName: "kube-api-access-cxvtx") pod "ce80c16f-5109-46b9-9438-4f05a4132175" (UID: "ce80c16f-5109-46b9-9438-4f05a4132175"). InnerVolumeSpecName "kube-api-access-cxvtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.662072 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce80c16f-5109-46b9-9438-4f05a4132175" (UID: "ce80c16f-5109-46b9-9438-4f05a4132175"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.675953 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory" (OuterVolumeSpecName: "inventory") pod "ce80c16f-5109-46b9-9438-4f05a4132175" (UID: "ce80c16f-5109-46b9-9438-4f05a4132175"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.733571 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxvtx\" (UniqueName: \"kubernetes.io/projected/ce80c16f-5109-46b9-9438-4f05a4132175-kube-api-access-cxvtx\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.733608 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:24 crc kubenswrapper[4593]: I0129 11:23:24.733618 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce80c16f-5109-46b9-9438-4f05a4132175-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.088436 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" event={"ID":"ce80c16f-5109-46b9-9438-4f05a4132175","Type":"ContainerDied","Data":"8bb418a005f09c4d6aa7fb45209905c676a3ac1244c00e9b891a5a9b4387ad6a"} Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.088482 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb418a005f09c4d6aa7fb45209905c676a3ac1244c00e9b891a5a9b4387ad6a" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.088505 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-7tzj5" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.188366 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz"] Jan 29 11:23:25 crc kubenswrapper[4593]: E0129 11:23:25.188900 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce80c16f-5109-46b9-9438-4f05a4132175" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.188922 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce80c16f-5109-46b9-9438-4f05a4132175" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.189119 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce80c16f-5109-46b9-9438-4f05a4132175" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.189822 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.191672 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.192347 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.193300 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.196356 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.209963 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz"] Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.244688 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8jfj\" (UniqueName: \"kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.244816 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.244991 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.245142 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.346947 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.347376 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.347426 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8jfj\" (UniqueName: \"kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.347530 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.351621 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.354659 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.366253 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.376464 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8jfj\" (UniqueName: \"kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:25 crc kubenswrapper[4593]: I0129 11:23:25.505553 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" Jan 29 11:23:26 crc kubenswrapper[4593]: I0129 11:23:26.076672 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz"] Jan 29 11:23:26 crc kubenswrapper[4593]: I0129 11:23:26.104879 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" event={"ID":"e4241343-d4f5-4690-972e-55f054a93f30","Type":"ContainerStarted","Data":"927630ede3ceb2d2afac7670352e3381e678c1d8aa9b338fadd8176b90b8c0c9"} Jan 29 11:23:27 crc kubenswrapper[4593]: I0129 11:23:27.127880 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" event={"ID":"e4241343-d4f5-4690-972e-55f054a93f30","Type":"ContainerStarted","Data":"003e33f77ddab212895fe8ef3045f9e0f29137cf03f6bd5a01a49972f0f487bc"} Jan 29 11:23:27 crc kubenswrapper[4593]: I0129 11:23:27.169438 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" podStartSLOduration=1.735073285 podStartE2EDuration="2.169416847s" podCreationTimestamp="2026-01-29 11:23:25 +0000 UTC" firstStartedPulling="2026-01-29 11:23:26.073588786 +0000 UTC m=+1471.946622977" lastFinishedPulling="2026-01-29 11:23:26.507932348 +0000 UTC m=+1472.380966539" observedRunningTime="2026-01-29 11:23:27.150458763 +0000 UTC m=+1473.023492954" watchObservedRunningTime="2026-01-29 11:23:27.169416847 +0000 UTC m=+1473.042451038" Jan 29 11:23:33 crc kubenswrapper[4593]: I0129 11:23:33.945851 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:23:33 crc kubenswrapper[4593]: I0129 11:23:33.946216 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:23:33 crc kubenswrapper[4593]: I0129 11:23:33.946274 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:23:33 crc kubenswrapper[4593]: I0129 11:23:33.947057 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:23:33 crc kubenswrapper[4593]: I0129 11:23:33.947114 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa" gracePeriod=600 Jan 29 11:23:35 crc kubenswrapper[4593]: I0129 11:23:35.203846 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" 
containerID="6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa" exitCode=0 Jan 29 11:23:35 crc kubenswrapper[4593]: I0129 11:23:35.203913 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa"} Jan 29 11:23:35 crc kubenswrapper[4593]: I0129 11:23:35.205306 4593 scope.go:117] "RemoveContainer" containerID="000d590ca55db27781027868adeaf4e729be5f85280050b0a93300e017c70002" Jan 29 11:23:37 crc kubenswrapper[4593]: I0129 11:23:37.231836 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"} Jan 29 11:24:11 crc kubenswrapper[4593]: I0129 11:24:11.777765 4593 scope.go:117] "RemoveContainer" containerID="b2737a73be5d76fb8f211f8bf7e6f7f5d5df136a1e001d613ced73be513cce7c" Jan 29 11:24:11 crc kubenswrapper[4593]: I0129 11:24:11.809338 4593 scope.go:117] "RemoveContainer" containerID="fca879370bdf54a12b3a105098148973a13eddb0bbbb835f4a9653bb9e65ca80" Jan 29 11:24:11 crc kubenswrapper[4593]: I0129 11:24:11.835179 4593 scope.go:117] "RemoveContainer" containerID="532ef2b08300e953556c4f80a0efbeeef65f13a2c78db2506158a85df92e08ac" Jan 29 11:24:11 crc kubenswrapper[4593]: I0129 11:24:11.862433 4593 scope.go:117] "RemoveContainer" containerID="6ca508da8e21ef8dd7d2c43f12f73a45b855f01c94f63172557349f3344fc6c9" Jan 29 11:24:19 crc kubenswrapper[4593]: I0129 11:24:19.941724 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:19 crc kubenswrapper[4593]: I0129 11:24:19.944783 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:19 crc kubenswrapper[4593]: I0129 11:24:19.972756 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.145945 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.146308 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njgdh\" (UniqueName: \"kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.146463 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.248768 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.249308 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.249833 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.250217 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njgdh\" (UniqueName: \"kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.250223 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.276429 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-njgdh\" (UniqueName: \"kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh\") pod \"community-operators-4gj62\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") " pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:20 crc kubenswrapper[4593]: I0129 11:24:20.570856 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:21 crc kubenswrapper[4593]: I0129 11:24:21.054337 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:22 crc kubenswrapper[4593]: I0129 11:24:22.129682 4593 generic.go:334] "Generic (PLEG): container finished" podID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerID="097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be" exitCode=0 Jan 29 11:24:22 crc kubenswrapper[4593]: I0129 11:24:22.129983 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerDied","Data":"097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be"} Jan 29 11:24:22 crc kubenswrapper[4593]: I0129 11:24:22.139660 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerStarted","Data":"0f223bb8ffe465ecc2b4d7adaa6dd0f8d56f5e4a5b1abbf62714c243ab708a1a"} Jan 29 11:24:22 crc kubenswrapper[4593]: I0129 11:24:22.133493 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:24:24 crc kubenswrapper[4593]: I0129 11:24:24.263236 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerStarted","Data":"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde"} Jan 29 11:24:27 crc kubenswrapper[4593]: I0129 11:24:27.597775 4593 generic.go:334] "Generic (PLEG): container finished" podID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerID="13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde" exitCode=0 Jan 29 11:24:27 crc kubenswrapper[4593]: I0129 11:24:27.598184 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerDied","Data":"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde"} Jan 29 11:24:28 crc kubenswrapper[4593]: I0129 11:24:28.610294 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerStarted","Data":"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5"} Jan 29 11:24:28 crc kubenswrapper[4593]: I0129 11:24:28.636071 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4gj62" podStartSLOduration=3.44295429 podStartE2EDuration="9.636036166s" podCreationTimestamp="2026-01-29 11:24:19 +0000 UTC" firstStartedPulling="2026-01-29 11:24:22.133255826 +0000 UTC m=+1528.006290017" lastFinishedPulling="2026-01-29 11:24:28.326337702 +0000 UTC m=+1534.199371893" observedRunningTime="2026-01-29 11:24:28.633468376 +0000 UTC m=+1534.506502567" watchObservedRunningTime="2026-01-29 
Jan 29 11:24:30 crc kubenswrapper[4593]: I0129 11:24:30.572661 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4gj62"
Jan 29 11:24:30 crc kubenswrapper[4593]: I0129 11:24:30.572723 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4gj62"
Jan 29 11:24:31 crc kubenswrapper[4593]: I0129 11:24:31.652715 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4gj62" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:24:31 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:24:31 crc kubenswrapper[4593]: >
Jan 29 11:24:41 crc kubenswrapper[4593]: I0129 11:24:41.626240 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4gj62" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:24:41 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:24:41 crc kubenswrapper[4593]: >
Jan 29 11:24:50 crc kubenswrapper[4593]: I0129 11:24:50.621002 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4gj62"
Jan 29 11:24:50 crc kubenswrapper[4593]: I0129 11:24:50.672404 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4gj62"
Jan 29 11:24:51 crc kubenswrapper[4593]: I0129 11:24:51.140103 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4gj62"]
Jan 29 11:24:51 crc kubenswrapper[4593]: I0129 11:24:51.846538 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4gj62" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server" containerID="cri-o://2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5" gracePeriod=2
Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.316856 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gj62"
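The probe output above ('timeout: failed to connect service ":50051" within 1s') is the characteristic failure message of a grpc_health_probe-style check against the registry-server's gRPC port; the catalog pod stays unready until the catalog has finished loading. A minimal sketch of such a check in Go, assuming the standard grpc-go health-checking API (this is not the kubelet's code, just an illustration of what the probe does):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Mirror the 1s budget visible in the probe output above.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock()) // fail within the deadline if nothing is listening
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
		os.Exit(1)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil || resp.GetStatus() != healthpb.HealthCheckResponse_SERVING {
		os.Exit(1) // the kubelet records this as probeResult="failure"
	}
}
```

Once the catalog finishes extracting (roughly 19 seconds here), the same check succeeds, which is the startup status="started" / readiness status="ready" transition logged at 11:24:50.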
Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.356010 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities\") pod \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") "
Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.356513 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njgdh\" (UniqueName: \"kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh\") pod \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") "
Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.356758 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content\") pod \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\" (UID: \"7cff8d0c-7d4a-4327-9785-6ca7367e906f\") "
Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.356893 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities" (OuterVolumeSpecName: "utilities") pod "7cff8d0c-7d4a-4327-9785-6ca7367e906f" (UID: "7cff8d0c-7d4a-4327-9785-6ca7367e906f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.357567 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.375590 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh" (OuterVolumeSpecName: "kube-api-access-njgdh") pod "7cff8d0c-7d4a-4327-9785-6ca7367e906f" (UID: "7cff8d0c-7d4a-4327-9785-6ca7367e906f"). InnerVolumeSpecName "kube-api-access-njgdh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.414789 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cff8d0c-7d4a-4327-9785-6ca7367e906f" (UID: "7cff8d0c-7d4a-4327-9785-6ca7367e906f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.460013 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cff8d0c-7d4a-4327-9785-6ca7367e906f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.460047 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njgdh\" (UniqueName: \"kubernetes.io/projected/7cff8d0c-7d4a-4327-9785-6ca7367e906f-kube-api-access-njgdh\") on node \"crc\" DevicePath \"\"" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.856781 4593 generic.go:334] "Generic (PLEG): container finished" podID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerID="2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5" exitCode=0 Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.856845 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4gj62" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.856856 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerDied","Data":"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5"} Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.856955 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4gj62" event={"ID":"7cff8d0c-7d4a-4327-9785-6ca7367e906f","Type":"ContainerDied","Data":"0f223bb8ffe465ecc2b4d7adaa6dd0f8d56f5e4a5b1abbf62714c243ab708a1a"} Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.856997 4593 scope.go:117] "RemoveContainer" containerID="2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.887046 4593 scope.go:117] "RemoveContainer" containerID="13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.894833 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.930963 4593 scope.go:117] "RemoveContainer" containerID="097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.947011 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4gj62"] Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.982405 4593 scope.go:117] "RemoveContainer" containerID="2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5" Jan 29 11:24:52 crc kubenswrapper[4593]: E0129 11:24:52.983112 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5\": container with ID starting with 2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5 not found: ID does not exist" containerID="2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.983233 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5"} err="failed to get container status 
\"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5\": rpc error: code = NotFound desc = could not find container \"2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5\": container with ID starting with 2c5780002c91c8e018fcb56c2a74b26b357b54216161e87ada00d04d653bfae5 not found: ID does not exist" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.983332 4593 scope.go:117] "RemoveContainer" containerID="13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde" Jan 29 11:24:52 crc kubenswrapper[4593]: E0129 11:24:52.983622 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde\": container with ID starting with 13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde not found: ID does not exist" containerID="13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.983721 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde"} err="failed to get container status \"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde\": rpc error: code = NotFound desc = could not find container \"13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde\": container with ID starting with 13513001d49c1795df7b400c21e30315f2a6c96e41c1f22c236f3f95800aafde not found: ID does not exist" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.983810 4593 scope.go:117] "RemoveContainer" containerID="097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be" Jan 29 11:24:52 crc kubenswrapper[4593]: E0129 11:24:52.984083 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be\": container with ID starting with 097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be not found: ID does not exist" containerID="097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be" Jan 29 11:24:52 crc kubenswrapper[4593]: I0129 11:24:52.984181 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be"} err="failed to get container status \"097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be\": rpc error: code = NotFound desc = could not find container \"097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be\": container with ID starting with 097d1deef03774835a1147ef52012071f282058acfc0fbb42b4f04f12e8033be not found: ID does not exist" Jan 29 11:24:53 crc kubenswrapper[4593]: I0129 11:24:53.085739 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" path="/var/lib/kubelet/pods/7cff8d0c-7d4a-4327-9785-6ca7367e906f/volumes" Jan 29 11:25:11 crc kubenswrapper[4593]: I0129 11:25:11.955843 4593 scope.go:117] "RemoveContainer" containerID="87db22d6791489959e08e606893fce26ecb348d061df7a0b1bececa26e54b97e" Jan 29 11:25:12 crc kubenswrapper[4593]: I0129 11:25:12.013949 4593 scope.go:117] "RemoveContainer" containerID="b40c06d60848c18dde2f01bdab763148fbbd484c84e7f102df5e8efc825c8e5d" Jan 29 11:25:12 crc kubenswrapper[4593]: I0129 11:25:12.059528 4593 scope.go:117] "RemoveContainer" 
containerID="9946bfb35dcb9ca60e203e5220d24dee1ca137e4fc677bef2b4ce91126586731" Jan 29 11:25:12 crc kubenswrapper[4593]: I0129 11:25:12.089686 4593 scope.go:117] "RemoveContainer" containerID="c53181da51f450d9ff6f9c844dc483cdabc6bd935abb96bbb849906b8c60f8a1" Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.086407 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cjzzm"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.111770 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c3a7-account-create-update-9b49r"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.124813 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-70b0-account-create-update-c8qbm"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.138873 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-c4fzt"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.154044 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cjzzm"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.168729 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c3a7-account-create-update-9b49r"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.183781 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-c4fzt"] Jan 29 11:25:16 crc kubenswrapper[4593]: I0129 11:25:16.195970 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-70b0-account-create-update-c8qbm"] Jan 29 11:25:17 crc kubenswrapper[4593]: I0129 11:25:17.092793 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b4524da-e80b-4bd2-a116-061694417007" path="/var/lib/kubelet/pods/3b4524da-e80b-4bd2-a116-061694417007/volumes" Jan 29 11:25:17 crc kubenswrapper[4593]: I0129 11:25:17.095091 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2687b78-f425-4fae-9af8-7021f3e01e36" path="/var/lib/kubelet/pods/e2687b78-f425-4fae-9af8-7021f3e01e36/volumes" Jan 29 11:25:17 crc kubenswrapper[4593]: I0129 11:25:17.095978 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2eab48b-4545-4fa3-81f1-6247ebcf425e" path="/var/lib/kubelet/pods/f2eab48b-4545-4fa3-81f1-6247ebcf425e/volumes" Jan 29 11:25:17 crc kubenswrapper[4593]: I0129 11:25:17.096949 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb1fb5b-1dc7-487a-b49d-d542eef7af31" path="/var/lib/kubelet/pods/fdb1fb5b-1dc7-487a-b49d-d542eef7af31/volumes" Jan 29 11:25:22 crc kubenswrapper[4593]: I0129 11:25:22.033211 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-pz4nl"] Jan 29 11:25:22 crc kubenswrapper[4593]: I0129 11:25:22.042795 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-pz4nl"] Jan 29 11:25:23 crc kubenswrapper[4593]: I0129 11:25:23.027891 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b99c-account-create-update-49grn"] Jan 29 11:25:23 crc kubenswrapper[4593]: I0129 11:25:23.036714 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b99c-account-create-update-49grn"] Jan 29 11:25:23 crc kubenswrapper[4593]: I0129 11:25:23.086301 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12899826-03ea-4b37-b523-74946fd54dee" path="/var/lib/kubelet/pods/12899826-03ea-4b37-b523-74946fd54dee/volumes" Jan 29 11:25:23 crc 
Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.924156 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"]
Jan 29 11:25:32 crc kubenswrapper[4593]: E0129 11:25:32.925327 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server"
Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.925359 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server"
Jan 29 11:25:32 crc kubenswrapper[4593]: E0129 11:25:32.925378 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="extract-content"
Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.925386 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="extract-content"
Jan 29 11:25:32 crc kubenswrapper[4593]: E0129 11:25:32.925406 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="extract-utilities"
Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.925415 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="extract-utilities"
Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.925721 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cff8d0c-7d4a-4327-9785-6ca7367e906f" containerName="registry-server"
Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.927544 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n85wt"
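The RemoveStaleState lines fire when a new pod is admitted: the CPU and memory managers purge any CPUSet/memory assignments still recorded for containers of pods that no longer exist (here the community-operators pod deleted a minute earlier), so these E-lines are housekeeping rather than failures. A sketch of that purge over an in-memory state map, loosely modeled on the behavior visible in the log (the types are hypothetical, not the kubelet's real state interface):

```go
package main

import "fmt"

// removeStaleState drops assignments whose pod UID is absent from the
// active set, emitting one line per container like cpu_manager does.
func removeStaleState(assignments map[string][]string, active map[string]bool) {
	for podUID, containers := range assignments {
		if active[podUID] {
			continue
		}
		for _, name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(assignments, podUID) // corresponds to "Deleted CPUSet assignment"
	}
}

func main() {
	assignments := map[string][]string{
		// stale entries left by the deleted community-operators pod
		"7cff8d0c-7d4a-4327-9785-6ca7367e906f": {"registry-server", "extract-content", "extract-utilities"},
	}
	active := map[string]bool{"f5ef266e-6732-412f-82a7-23482ba2dfe2": true}
	removeStaleState(assignments, active)
}
```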
Jan 29 11:25:32 crc kubenswrapper[4593]: I0129 11:25:32.942139 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"]
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.008027 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.008142 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbjz5\" (UniqueName: \"kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.008232 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.110405 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.110563 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.110619 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbjz5\" (UniqueName: \"kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.111035 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.111829 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.131169 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbjz5\" (UniqueName: \"kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5\") pod \"redhat-marketplace-n85wt\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") " pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.254219 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:33 crc kubenswrapper[4593]: I0129 11:25:33.731932 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"]
Jan 29 11:25:34 crc kubenswrapper[4593]: I0129 11:25:34.254355 4593 generic.go:334] "Generic (PLEG): container finished" podID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerID="b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2" exitCode=0
Jan 29 11:25:34 crc kubenswrapper[4593]: I0129 11:25:34.254436 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerDied","Data":"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2"}
Jan 29 11:25:34 crc kubenswrapper[4593]: I0129 11:25:34.254758 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerStarted","Data":"f2ddb1195350fe2e49e68f4403861bf9781674dc12a681b98af4ebb0c6014187"}
Jan 29 11:25:36 crc kubenswrapper[4593]: I0129 11:25:36.272943 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerStarted","Data":"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951"}
Jan 29 11:25:38 crc kubenswrapper[4593]: I0129 11:25:38.292537 4593 generic.go:334] "Generic (PLEG): container finished" podID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerID="e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951" exitCode=0
Jan 29 11:25:38 crc kubenswrapper[4593]: I0129 11:25:38.292733 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerDied","Data":"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951"}
Jan 29 11:25:39 crc kubenswrapper[4593]: I0129 11:25:39.039553 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-625ls"]
Jan 29 11:25:39 crc kubenswrapper[4593]: I0129 11:25:39.048450 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-625ls"]
Jan 29 11:25:39 crc kubenswrapper[4593]: I0129 11:25:39.087770 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d59502-9350-4842-bd01-35d55f0b47fa" path="/var/lib/kubelet/pods/56d59502-9350-4842-bd01-35d55f0b47fa/volumes"
Jan 29 11:25:40 crc kubenswrapper[4593]: I0129 11:25:40.313072 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerStarted","Data":"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191"}
Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.032503 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n85wt" podStartSLOduration=5.756956356 podStartE2EDuration="11.032469614s" podCreationTimestamp="2026-01-29 11:25:32 +0000 UTC" firstStartedPulling="2026-01-29 11:25:34.256994079 +0000 UTC m=+1600.130028270" lastFinishedPulling="2026-01-29 11:25:39.532507337 +0000 UTC m=+1605.405541528" observedRunningTime="2026-01-29 11:25:40.334385092 +0000 UTC m=+1606.207419283" watchObservedRunningTime="2026-01-29 11:25:43.032469614 +0000 UTC m=+1608.905503805"
Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.038899 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vdz52"]
Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.045895 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vdz52"]
Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.086283 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52b59817-1d9d-431d-8055-cf98107b89a2" path="/var/lib/kubelet/pods/52b59817-1d9d-431d-8055-cf98107b89a2/volumes"
Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.254993 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.255045 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:43 crc kubenswrapper[4593]: I0129 11:25:43.307673 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.045312 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-140c-account-create-update-csqgp"]
Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.057835 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4c8a-account-create-update-psrpm"]
Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.069006 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4c8a-account-create-update-psrpm"]
Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.079748 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-jgv94"]
Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.090385 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-9hskn"]
Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.098540 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-140c-account-create-update-csqgp"]
Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.107193 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-0486-account-create-update-f9r68"]
Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.115393 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-9hskn"]
Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.124275 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-jgv94"]
Jan 29 11:25:44 crc kubenswrapper[4593]: I0129 11:25:44.132462 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-0486-account-create-update-f9r68"]
Jan 29 11:25:45 crc kubenswrapper[4593]: I0129 11:25:45.096820 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115d89c5-8038-4b55-9f1d-d0f169ee0b53" path="/var/lib/kubelet/pods/115d89c5-8038-4b55-9f1d-d0f169ee0b53/volumes"
Jan 29 11:25:45 crc kubenswrapper[4593]: I0129 11:25:45.098004 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef7a572-9631-4078-a6ed-419d2a4dfdf9" path="/var/lib/kubelet/pods/1ef7a572-9631-4078-a6ed-419d2a4dfdf9/volumes"
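The two duration fields in the startup-latency entry above are related by simple arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. For this pod: 11.032469614s minus (11:25:39.532507337 - 11:25:34.256994079 = 5.275513258s) equals 5.756956356s, matching the logged value. A quick check in Go using the timestamps from this entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout with optional fractional seconds, matching the log's format.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-29 11:25:32 +0000 UTC")
	firstPull := parse("2026-01-29 11:25:34.256994079 +0000 UTC")
	lastPull := parse("2026-01-29 11:25:39.532507337 +0000 UTC")
	observed := parse("2026-01-29 11:25:43.032469614 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // exclude image-pull time
	fmt.Println(e2e, slo)                // 11.032469614s 5.756956356s
}
```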
Jan 29 11:25:45 crc kubenswrapper[4593]: I0129 11:25:45.099259 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d46f220-cb33-4768-91f5-c59e98c41af4" path="/var/lib/kubelet/pods/6d46f220-cb33-4768-91f5-c59e98c41af4/volumes"
Jan 29 11:25:45 crc kubenswrapper[4593]: I0129 11:25:45.100177 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe" path="/var/lib/kubelet/pods/7c572c7d-971f-4f21-81cf-f5d5f7d5d9fe/volumes"
Jan 29 11:25:45 crc kubenswrapper[4593]: I0129 11:25:45.101936 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbee97db-a8f1-43e0-ac0b-ec58529b2c03" path="/var/lib/kubelet/pods/fbee97db-a8f1-43e0-ac0b-ec58529b2c03/volumes"
Jan 29 11:25:53 crc kubenswrapper[4593]: I0129 11:25:53.304556 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:53 crc kubenswrapper[4593]: I0129 11:25:53.363897 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"]
Jan 29 11:25:53 crc kubenswrapper[4593]: I0129 11:25:53.435034 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n85wt" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="registry-server" containerID="cri-o://60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191" gracePeriod=2
Jan 29 11:25:53 crc kubenswrapper[4593]: I0129 11:25:53.959525 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n85wt"
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.073750 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wzm6z"]
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.084575 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wzm6z"]
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.121521 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbjz5\" (UniqueName: \"kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5\") pod \"f5ef266e-6732-412f-82a7-23482ba2dfe2\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") "
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.121993 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities\") pod \"f5ef266e-6732-412f-82a7-23482ba2dfe2\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") "
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.122250 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content\") pod \"f5ef266e-6732-412f-82a7-23482ba2dfe2\" (UID: \"f5ef266e-6732-412f-82a7-23482ba2dfe2\") "
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.123094 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities" (OuterVolumeSpecName: "utilities") pod "f5ef266e-6732-412f-82a7-23482ba2dfe2" (UID: "f5ef266e-6732-412f-82a7-23482ba2dfe2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.133515 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5" (OuterVolumeSpecName: "kube-api-access-bbjz5") pod "f5ef266e-6732-412f-82a7-23482ba2dfe2" (UID: "f5ef266e-6732-412f-82a7-23482ba2dfe2"). InnerVolumeSpecName "kube-api-access-bbjz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.155028 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5ef266e-6732-412f-82a7-23482ba2dfe2" (UID: "f5ef266e-6732-412f-82a7-23482ba2dfe2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.225221 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.225266 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5ef266e-6732-412f-82a7-23482ba2dfe2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.225282 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbjz5\" (UniqueName: \"kubernetes.io/projected/f5ef266e-6732-412f-82a7-23482ba2dfe2-kube-api-access-bbjz5\") on node \"crc\" DevicePath \"\"" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.451894 4593 generic.go:334] "Generic (PLEG): container finished" podID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerID="60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191" exitCode=0 Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.451940 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerDied","Data":"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191"} Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.451966 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n85wt" event={"ID":"f5ef266e-6732-412f-82a7-23482ba2dfe2","Type":"ContainerDied","Data":"f2ddb1195350fe2e49e68f4403861bf9781674dc12a681b98af4ebb0c6014187"} Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.451984 4593 scope.go:117] "RemoveContainer" containerID="60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.452132 4593 util.go:48] "No ready sandbox for pod can be found. 
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.483997 4593 scope.go:117] "RemoveContainer" containerID="e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951"
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.510842 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"]
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.515156 4593 scope.go:117] "RemoveContainer" containerID="b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2"
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.523777 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n85wt"]
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.613091 4593 scope.go:117] "RemoveContainer" containerID="60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191"
Jan 29 11:25:54 crc kubenswrapper[4593]: E0129 11:25:54.613747 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191\": container with ID starting with 60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191 not found: ID does not exist" containerID="60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191"
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.613883 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191"} err="failed to get container status \"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191\": rpc error: code = NotFound desc = could not find container \"60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191\": container with ID starting with 60ef59731a01398455e8c4438702f5f2a2748cc8338763413d949581bafe6191 not found: ID does not exist"
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.613975 4593 scope.go:117] "RemoveContainer" containerID="e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951"
Jan 29 11:25:54 crc kubenswrapper[4593]: E0129 11:25:54.615087 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951\": container with ID starting with e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951 not found: ID does not exist" containerID="e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951"
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.615151 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951"} err="failed to get container status \"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951\": rpc error: code = NotFound desc = could not find container \"e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951\": container with ID starting with e99a6fceba0a5b98e17f5cc19308deeca9c2b4760edddc3d455131af64f66951 not found: ID does not exist"
Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.615181 4593 scope.go:117] "RemoveContainer" containerID="b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2"
Jan 29 11:25:54 crc kubenswrapper[4593]: E0129 11:25:54.617000 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2\": container with ID starting with b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2 not found: ID does not exist" containerID="b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2"
failed" err="rpc error: code = NotFound desc = could not find container \"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2\": container with ID starting with b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2 not found: ID does not exist" containerID="b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2" Jan 29 11:25:54 crc kubenswrapper[4593]: I0129 11:25:54.617034 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2"} err="failed to get container status \"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2\": rpc error: code = NotFound desc = could not find container \"b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2\": container with ID starting with b5319e51bb53037f41058c4b388b9111c7b6d25cb642a1e92f01aa92c10930f2 not found: ID does not exist" Jan 29 11:25:55 crc kubenswrapper[4593]: I0129 11:25:55.093882 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0b4a25-540c-47dd-96fb-fdc6872721b5" path="/var/lib/kubelet/pods/9c0b4a25-540c-47dd-96fb-fdc6872721b5/volumes" Jan 29 11:25:55 crc kubenswrapper[4593]: I0129 11:25:55.095103 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" path="/var/lib/kubelet/pods/f5ef266e-6732-412f-82a7-23482ba2dfe2/volumes" Jan 29 11:26:03 crc kubenswrapper[4593]: I0129 11:26:03.946183 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:26:03 crc kubenswrapper[4593]: I0129 11:26:03.946756 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.198142 4593 scope.go:117] "RemoveContainer" containerID="18ec4b46dd2b143a4699e4f0f9fb21bf0908d4fea6194256ca5d46a4b1e3154b" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.261985 4593 scope.go:117] "RemoveContainer" containerID="8daab26085422d8b821fec9dd8845576bd1f7996b7bd02a206e4ec1ed954891a" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.292755 4593 scope.go:117] "RemoveContainer" containerID="cfeb01d9eafd6f66b4b9db53f4dc0ef8f8de91ea87a6bf0dc6e1a2b4cfb6bce8" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.338490 4593 scope.go:117] "RemoveContainer" containerID="43d82ed1472c3625ce9296a41e8408518af652ca97d81bd779f6e88331c78c4e" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.388215 4593 scope.go:117] "RemoveContainer" containerID="2e1d0fad53de84474f89284c6a88dc3a72dfb695af32b237f2378dd7177ae8c5" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.441922 4593 scope.go:117] "RemoveContainer" containerID="b2686e149913ab0d7eb8e1c1ab82711e8bc8d0f1e7c674ad1bb843e01690c119" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.483045 4593 scope.go:117] "RemoveContainer" containerID="d302776b71ae9de08283f287bc6180cc80cb27e0867558e7d6ef7199f716f657" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.513846 4593 scope.go:117] "RemoveContainer" 
containerID="f4b832d6a02cddde771b6eeb4da2b7e8c024cb3a623b350dff1e411d17b9ecfd" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.541930 4593 scope.go:117] "RemoveContainer" containerID="26e9d793caead0da7c6fbe2d2cc88998f753f02199ec672516904069fc61c2fc" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.560730 4593 scope.go:117] "RemoveContainer" containerID="db6e520018218e0ecd1d4a8d69f63a0e96eea393f5e0abbccf345503319fb4c2" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.592221 4593 scope.go:117] "RemoveContainer" containerID="b2e16a35b6612eefbbea849496217b01c0c3973f0a5bc7ad6ae362ff548b8cf0" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.673410 4593 scope.go:117] "RemoveContainer" containerID="9d37cf9a7f03d5742ea9e7314623a8e8f189e15526f469c97b71739526cfc70b" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.714401 4593 scope.go:117] "RemoveContainer" containerID="c00b7731a137cc5e16b524de8c2c6a1402d07e79205488315ad3920c71b523b5" Jan 29 11:26:12 crc kubenswrapper[4593]: I0129 11:26:12.753799 4593 scope.go:117] "RemoveContainer" containerID="1146c75a258cb4ad7f71cc2e37d3a74813526e1b88d59d1880e58f1ae91dd7d1" Jan 29 11:26:33 crc kubenswrapper[4593]: I0129 11:26:33.946961 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:26:33 crc kubenswrapper[4593]: I0129 11:26:33.947481 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.762974 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"] Jan 29 11:26:46 crc kubenswrapper[4593]: E0129 11:26:46.763939 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="extract-utilities" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.763954 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="extract-utilities" Jan 29 11:26:46 crc kubenswrapper[4593]: E0129 11:26:46.763978 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="extract-content" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.763984 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="extract-content" Jan 29 11:26:46 crc kubenswrapper[4593]: E0129 11:26:46.763994 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="registry-server" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.764000 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="registry-server" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.764357 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5ef266e-6732-412f-82a7-23482ba2dfe2" containerName="registry-server" Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.765729 4593 util.go:30] "No 
Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.801688 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"]
Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.912909 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.912988 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvng9\" (UniqueName: \"kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:46 crc kubenswrapper[4593]: I0129 11:26:46.913231 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.015414 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.015462 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvng9\" (UniqueName: \"kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.015502 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.015960 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.018928 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.038224 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvng9\" (UniqueName: \"kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9\") pod \"certified-operators-jqjbm\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.098112 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:47 crc kubenswrapper[4593]: I0129 11:26:47.589785 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"]
Jan 29 11:26:48 crc kubenswrapper[4593]: I0129 11:26:48.032665 4593 generic.go:334] "Generic (PLEG): container finished" podID="86e2d453-9800-4924-84df-86f0f43e5d99" containerID="eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7" exitCode=0
Jan 29 11:26:48 crc kubenswrapper[4593]: I0129 11:26:48.032861 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerDied","Data":"eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7"}
Jan 29 11:26:48 crc kubenswrapper[4593]: I0129 11:26:48.033015 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerStarted","Data":"67966c1309c45a48c63afccd47f924ae485ed1b5ff7fd66be898dc112116f944"}
Jan 29 11:26:49 crc kubenswrapper[4593]: I0129 11:26:49.046574 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerStarted","Data":"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc"}
Jan 29 11:26:52 crc kubenswrapper[4593]: I0129 11:26:52.077439 4593 generic.go:334] "Generic (PLEG): container finished" podID="86e2d453-9800-4924-84df-86f0f43e5d99" containerID="f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc" exitCode=0
Jan 29 11:26:52 crc kubenswrapper[4593]: I0129 11:26:52.077523 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerDied","Data":"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc"}
Jan 29 11:26:53 crc kubenswrapper[4593]: I0129 11:26:53.090986 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerStarted","Data":"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378"}
Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.098710 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.099971 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.154706 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jqjbm"
Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.188262 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jqjbm" podStartSLOduration=6.486181662 podStartE2EDuration="11.188238588s" podCreationTimestamp="2026-01-29 11:26:46 +0000 UTC" firstStartedPulling="2026-01-29 11:26:48.034892663 +0000 UTC m=+1673.907926854" lastFinishedPulling="2026-01-29 11:26:52.736949589 +0000 UTC m=+1678.609983780" observedRunningTime="2026-01-29 11:26:53.122773003 +0000 UTC m=+1678.995807204" watchObservedRunningTime="2026-01-29 11:26:57.188238588 +0000 UTC m=+1683.061272779"
pod="openshift-marketplace/certified-operators-jqjbm" podStartSLOduration=6.486181662 podStartE2EDuration="11.188238588s" podCreationTimestamp="2026-01-29 11:26:46 +0000 UTC" firstStartedPulling="2026-01-29 11:26:48.034892663 +0000 UTC m=+1673.907926854" lastFinishedPulling="2026-01-29 11:26:52.736949589 +0000 UTC m=+1678.609983780" observedRunningTime="2026-01-29 11:26:53.122773003 +0000 UTC m=+1678.995807204" watchObservedRunningTime="2026-01-29 11:26:57.188238588 +0000 UTC m=+1683.061272779" Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.211887 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:57 crc kubenswrapper[4593]: I0129 11:26:57.399650 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"] Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.173409 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jqjbm" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="registry-server" containerID="cri-o://c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378" gracePeriod=2 Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.646702 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.784571 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content\") pod \"86e2d453-9800-4924-84df-86f0f43e5d99\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.784770 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvng9\" (UniqueName: \"kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9\") pod \"86e2d453-9800-4924-84df-86f0f43e5d99\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.784903 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities\") pod \"86e2d453-9800-4924-84df-86f0f43e5d99\" (UID: \"86e2d453-9800-4924-84df-86f0f43e5d99\") " Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.786425 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities" (OuterVolumeSpecName: "utilities") pod "86e2d453-9800-4924-84df-86f0f43e5d99" (UID: "86e2d453-9800-4924-84df-86f0f43e5d99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.793021 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9" (OuterVolumeSpecName: "kube-api-access-xvng9") pod "86e2d453-9800-4924-84df-86f0f43e5d99" (UID: "86e2d453-9800-4924-84df-86f0f43e5d99"). InnerVolumeSpecName "kube-api-access-xvng9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.846500 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86e2d453-9800-4924-84df-86f0f43e5d99" (UID: "86e2d453-9800-4924-84df-86f0f43e5d99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.886791 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvng9\" (UniqueName: \"kubernetes.io/projected/86e2d453-9800-4924-84df-86f0f43e5d99-kube-api-access-xvng9\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.886828 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:26:59 crc kubenswrapper[4593]: I0129 11:26:59.886839 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86e2d453-9800-4924-84df-86f0f43e5d99-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.182572 4593 generic.go:334] "Generic (PLEG): container finished" podID="86e2d453-9800-4924-84df-86f0f43e5d99" containerID="c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378" exitCode=0 Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.182617 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerDied","Data":"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378"} Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.182652 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jqjbm" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.182671 4593 scope.go:117] "RemoveContainer" containerID="c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.182660 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqjbm" event={"ID":"86e2d453-9800-4924-84df-86f0f43e5d99","Type":"ContainerDied","Data":"67966c1309c45a48c63afccd47f924ae485ed1b5ff7fd66be898dc112116f944"} Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.210732 4593 scope.go:117] "RemoveContainer" containerID="f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.238410 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"] Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.272837 4593 scope.go:117] "RemoveContainer" containerID="eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.275982 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jqjbm"] Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.336717 4593 scope.go:117] "RemoveContainer" containerID="c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378" Jan 29 11:27:00 crc kubenswrapper[4593]: E0129 11:27:00.340827 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378\": container with ID starting with c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378 not found: ID does not exist" containerID="c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.340887 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378"} err="failed to get container status \"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378\": rpc error: code = NotFound desc = could not find container \"c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378\": container with ID starting with c6afa4b7206057f1b10675f27b3095b4028a9fb8351a45cfeeda0413104ef378 not found: ID does not exist" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.340924 4593 scope.go:117] "RemoveContainer" containerID="f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc" Jan 29 11:27:00 crc kubenswrapper[4593]: E0129 11:27:00.341620 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc\": container with ID starting with f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc not found: ID does not exist" containerID="f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.341666 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc"} err="failed to get container status \"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc\": rpc error: code = NotFound desc = could not find 
container \"f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc\": container with ID starting with f6ce646fc478ffd2b851c3bfb90c157d20f4c1b31c6eb71cef5ff6556bb895bc not found: ID does not exist" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.341686 4593 scope.go:117] "RemoveContainer" containerID="eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7" Jan 29 11:27:00 crc kubenswrapper[4593]: E0129 11:27:00.342338 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7\": container with ID starting with eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7 not found: ID does not exist" containerID="eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7" Jan 29 11:27:00 crc kubenswrapper[4593]: I0129 11:27:00.342365 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7"} err="failed to get container status \"eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7\": rpc error: code = NotFound desc = could not find container \"eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7\": container with ID starting with eb460d38ed530ffd615538bb7baa7581b00e46748a91a7c9f2eff3d9ab864da7 not found: ID does not exist" Jan 29 11:27:01 crc kubenswrapper[4593]: I0129 11:27:01.085387 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" path="/var/lib/kubelet/pods/86e2d453-9800-4924-84df-86f0f43e5d99/volumes" Jan 29 11:27:03 crc kubenswrapper[4593]: I0129 11:27:03.946249 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:27:03 crc kubenswrapper[4593]: I0129 11:27:03.946611 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:27:03 crc kubenswrapper[4593]: I0129 11:27:03.946744 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:27:03 crc kubenswrapper[4593]: I0129 11:27:03.947500 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:27:03 crc kubenswrapper[4593]: I0129 11:27:03.947569 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" gracePeriod=600 Jan 29 11:27:04 crc kubenswrapper[4593]: E0129 11:27:04.075589 4593 
Jan 29 11:27:04 crc kubenswrapper[4593]: I0129 11:27:04.227359 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" exitCode=0
Jan 29 11:27:04 crc kubenswrapper[4593]: I0129 11:27:04.227455 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"}
Jan 29 11:27:04 crc kubenswrapper[4593]: I0129 11:27:04.227872 4593 scope.go:117] "RemoveContainer" containerID="6f628dc297b127220882a1d8752d50a08dc9b333c2a314b358e3c3d4a79bcfaa"
Jan 29 11:27:04 crc kubenswrapper[4593]: I0129 11:27:04.228540 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:27:04 crc kubenswrapper[4593]: E0129 11:27:04.228861 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:27:06 crc kubenswrapper[4593]: I0129 11:27:06.249548 4593 generic.go:334] "Generic (PLEG): container finished" podID="e4241343-d4f5-4690-972e-55f054a93f30" containerID="003e33f77ddab212895fe8ef3045f9e0f29137cf03f6bd5a01a49972f0f487bc" exitCode=0
Jan 29 11:27:06 crc kubenswrapper[4593]: I0129 11:27:06.249592 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" event={"ID":"e4241343-d4f5-4690-972e-55f054a93f30","Type":"ContainerDied","Data":"003e33f77ddab212895fe8ef3045f9e0f29137cf03f6bd5a01a49972f0f487bc"}
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.703195 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz"
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.742188 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory\") pod \"e4241343-d4f5-4690-972e-55f054a93f30\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") "
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.742392 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle\") pod \"e4241343-d4f5-4690-972e-55f054a93f30\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") "
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.742453 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8jfj\" (UniqueName: \"kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj\") pod \"e4241343-d4f5-4690-972e-55f054a93f30\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") "
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.742597 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam\") pod \"e4241343-d4f5-4690-972e-55f054a93f30\" (UID: \"e4241343-d4f5-4690-972e-55f054a93f30\") "
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.757001 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj" (OuterVolumeSpecName: "kube-api-access-s8jfj") pod "e4241343-d4f5-4690-972e-55f054a93f30" (UID: "e4241343-d4f5-4690-972e-55f054a93f30"). InnerVolumeSpecName "kube-api-access-s8jfj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.757136 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e4241343-d4f5-4690-972e-55f054a93f30" (UID: "e4241343-d4f5-4690-972e-55f054a93f30"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.782809 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory" (OuterVolumeSpecName: "inventory") pod "e4241343-d4f5-4690-972e-55f054a93f30" (UID: "e4241343-d4f5-4690-972e-55f054a93f30"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.785577 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4241343-d4f5-4690-972e-55f054a93f30" (UID: "e4241343-d4f5-4690-972e-55f054a93f30"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.846272 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8jfj\" (UniqueName: \"kubernetes.io/projected/e4241343-d4f5-4690-972e-55f054a93f30-kube-api-access-s8jfj\") on node \"crc\" DevicePath \"\""
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.846541 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.846675 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-inventory\") on node \"crc\" DevicePath \"\""
Jan 29 11:27:07 crc kubenswrapper[4593]: I0129 11:27:07.846758 4593 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4241343-d4f5-4690-972e-55f054a93f30-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.273792 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz" event={"ID":"e4241343-d4f5-4690-972e-55f054a93f30","Type":"ContainerDied","Data":"927630ede3ceb2d2afac7670352e3381e678c1d8aa9b338fadd8176b90b8c0c9"}
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.273865 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="927630ede3ceb2d2afac7670352e3381e678c1d8aa9b338fadd8176b90b8c0c9"
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.273907 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz"
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.382377 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j"]
Jan 29 11:27:08 crc kubenswrapper[4593]: E0129 11:27:08.383192 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4241343-d4f5-4690-972e-55f054a93f30" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383216 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4241343-d4f5-4690-972e-55f054a93f30" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:27:08 crc kubenswrapper[4593]: E0129 11:27:08.383237 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="extract-utilities"
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383246 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="extract-utilities"
Jan 29 11:27:08 crc kubenswrapper[4593]: E0129 11:27:08.383255 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="registry-server"
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383264 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="registry-server"
Jan 29 11:27:08 crc kubenswrapper[4593]: E0129 11:27:08.383278 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="extract-content"
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383309 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="extract-content"
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383586 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e2d453-9800-4924-84df-86f0f43e5d99" containerName="registry-server"
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.383608 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4241343-d4f5-4690-972e-55f054a93f30" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.386866 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j"
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.389929 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.389982 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.390158 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.390300 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.396090 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j"] Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.459735 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lpsk\" (UniqueName: \"kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.459971 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.460120 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.562131 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lpsk\" (UniqueName: \"kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.562221 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.562261 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.572070 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.581059 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lpsk\" (UniqueName: \"kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.583106 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g462j\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:08 crc kubenswrapper[4593]: I0129 11:27:08.706375 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:27:09 crc kubenswrapper[4593]: I0129 11:27:09.248081 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j"] Jan 29 11:27:09 crc kubenswrapper[4593]: I0129 11:27:09.283148 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" event={"ID":"fee0ef55-8edb-456c-9344-98a3b34d3aab","Type":"ContainerStarted","Data":"b0ae0b25831e041bfe96f6c4a3d79e01d947c880509926da1feb03c9559ebd7a"} Jan 29 11:27:11 crc kubenswrapper[4593]: I0129 11:27:11.343553 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" event={"ID":"fee0ef55-8edb-456c-9344-98a3b34d3aab","Type":"ContainerStarted","Data":"5c199554479c727e40d38e1c73ab1886c6ddf721c6751444cd8da17a69216ec5"} Jan 29 11:27:11 crc kubenswrapper[4593]: I0129 11:27:11.370944 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" podStartSLOduration=2.268501322 podStartE2EDuration="3.370892542s" podCreationTimestamp="2026-01-29 11:27:08 +0000 UTC" firstStartedPulling="2026-01-29 11:27:09.250961261 +0000 UTC m=+1695.123995462" lastFinishedPulling="2026-01-29 11:27:10.353352491 +0000 UTC m=+1696.226386682" observedRunningTime="2026-01-29 11:27:11.360813249 +0000 UTC m=+1697.233847440" watchObservedRunningTime="2026-01-29 11:27:11.370892542 +0000 UTC m=+1697.243926733" Jan 29 11:27:13 crc kubenswrapper[4593]: I0129 11:27:13.205112 4593 scope.go:117] "RemoveContainer" containerID="660df2719e4927e909a269c0af10ce5b75a1a0017c3734f8e647f89f3520914c" Jan 29 11:27:15 crc 
kubenswrapper[4593]: I0129 11:27:15.065553 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8z7b6"] Jan 29 11:27:15 crc kubenswrapper[4593]: I0129 11:27:15.081315 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:27:15 crc kubenswrapper[4593]: E0129 11:27:15.081652 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:27:15 crc kubenswrapper[4593]: I0129 11:27:15.100146 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-dd7hj"] Jan 29 11:27:15 crc kubenswrapper[4593]: I0129 11:27:15.100194 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-dd7hj"] Jan 29 11:27:15 crc kubenswrapper[4593]: I0129 11:27:15.106824 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8z7b6"] Jan 29 11:27:16 crc kubenswrapper[4593]: I0129 11:27:16.036565 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-db54x"] Jan 29 11:27:16 crc kubenswrapper[4593]: I0129 11:27:16.047253 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-db54x"] Jan 29 11:27:17 crc kubenswrapper[4593]: I0129 11:27:17.086734 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31f590aa-412a-41ab-92fd-2202c9b456b4" path="/var/lib/kubelet/pods/31f590aa-412a-41ab-92fd-2202c9b456b4/volumes" Jan 29 11:27:17 crc kubenswrapper[4593]: I0129 11:27:17.087418 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9" path="/var/lib/kubelet/pods/3fe4b5cd-471d-49d2-bf2b-c3a6bac48aa9/volumes" Jan 29 11:27:17 crc kubenswrapper[4593]: I0129 11:27:17.088040 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6bbbb39-f79c-4647-976b-6225ac21e63b" path="/var/lib/kubelet/pods/a6bbbb39-f79c-4647-976b-6225ac21e63b/volumes" Jan 29 11:27:24 crc kubenswrapper[4593]: I0129 11:27:24.034047 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2wbrt"] Jan 29 11:27:24 crc kubenswrapper[4593]: I0129 11:27:24.046701 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2wbrt"] Jan 29 11:27:25 crc kubenswrapper[4593]: I0129 11:27:25.086062 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39458c0-d624-4ed0-8444-417e479028d2" path="/var/lib/kubelet/pods/c39458c0-d624-4ed0-8444-417e479028d2/volumes" Jan 29 11:27:27 crc kubenswrapper[4593]: I0129 11:27:27.075283 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:27:27 crc kubenswrapper[4593]: E0129 11:27:27.076856 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" 
podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:27:31 crc kubenswrapper[4593]: I0129 11:27:31.042811 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qqbm9"] Jan 29 11:27:31 crc kubenswrapper[4593]: I0129 11:27:31.053826 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qqbm9"] Jan 29 11:27:31 crc kubenswrapper[4593]: I0129 11:27:31.086306 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0467fe-4786-4231-bf52-8a305e9a4f89" path="/var/lib/kubelet/pods/9a0467fe-4786-4231-bf52-8a305e9a4f89/volumes" Jan 29 11:27:40 crc kubenswrapper[4593]: I0129 11:27:40.074837 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:27:40 crc kubenswrapper[4593]: E0129 11:27:40.076654 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:27:53 crc kubenswrapper[4593]: I0129 11:27:53.075693 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:27:53 crc kubenswrapper[4593]: E0129 11:27:53.077788 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:27:57 crc kubenswrapper[4593]: I0129 11:27:57.064600 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qt4jn"] Jan 29 11:27:57 crc kubenswrapper[4593]: I0129 11:27:57.082557 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qt4jn"] Jan 29 11:27:57 crc kubenswrapper[4593]: I0129 11:27:57.108213 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1563c063-cd19-4793-97c0-45ca3e4a3e0c" path="/var/lib/kubelet/pods/1563c063-cd19-4793-97c0-45ca3e4a3e0c/volumes" Jan 29 11:28:04 crc kubenswrapper[4593]: I0129 11:28:04.075939 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:28:04 crc kubenswrapper[4593]: E0129 11:28:04.076790 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.299268 4593 scope.go:117] "RemoveContainer" containerID="06197cae1e3adecc87ccca3058356e85b083a773c3ebd8eeabc6c5475d59dd8e" Jan 29 11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.346510 4593 scope.go:117] "RemoveContainer" containerID="dc02c784a57ca12374f0aced757e32f43b54151f61a6897de1dd6a96f158aedc" Jan 29 
Jan 29 11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.469415 4593 scope.go:117] "RemoveContainer" containerID="b6f550864b30cf24b91a51e513d7e513cf9d2ef7137812c6edc720f9813967f9"
Jan 29 11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.512127 4593 scope.go:117] "RemoveContainer" containerID="99ff344d90d5bdd893d1e77e101cd6e34638c02acf7127cecbfee61fab7d69ad"
Jan 29 11:28:13 crc kubenswrapper[4593]: I0129 11:28:13.560025 4593 scope.go:117] "RemoveContainer" containerID="6029f6551650b545bead0d4f37b1f5f3a81f76cf7f6f139456a1354a00bcaf99"
Jan 29 11:28:15 crc kubenswrapper[4593]: I0129 11:28:15.111654 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:28:15 crc kubenswrapper[4593]: E0129 11:28:15.113347 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:28:28 crc kubenswrapper[4593]: I0129 11:28:28.075297 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:28:28 crc kubenswrapper[4593]: E0129 11:28:28.076433 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:28:39 crc kubenswrapper[4593]: I0129 11:28:39.075790 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:28:39 crc kubenswrapper[4593]: E0129 11:28:39.076676 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:28:51 crc kubenswrapper[4593]: I0129 11:28:51.075169 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:28:51 crc kubenswrapper[4593]: E0129 11:28:51.076024 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:29:03 crc kubenswrapper[4593]: I0129 11:29:03.075802 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:29:03 crc kubenswrapper[4593]: E0129 11:29:03.076723 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:29:04 crc kubenswrapper[4593]: I0129 11:29:04.051095 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-02db-account-create-update-8h7xj"]
Jan 29 11:29:04 crc kubenswrapper[4593]: I0129 11:29:04.062514 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-02db-account-create-update-8h7xj"]
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.055330 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-207d-account-create-update-n289g"]
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.070386 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-bbb2-account-create-update-nq54g"]
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.092946 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc0715e-34d0-4d5e-a8cc-5809adc6e264" path="/var/lib/kubelet/pods/3cc0715e-34d0-4d5e-a8cc-5809adc6e264/volumes"
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.098786 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vfj8w"]
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.100027 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-bbb2-account-create-update-nq54g"]
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.110504 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-86jg9"]
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.117914 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vfj8w"]
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.127025 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vpcpg"]
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.135882 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-207d-account-create-update-n289g"]
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.144348 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-86jg9"]
Jan 29 11:29:05 crc kubenswrapper[4593]: I0129 11:29:05.152847 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vpcpg"]
Jan 29 11:29:07 crc kubenswrapper[4593]: I0129 11:29:07.095795 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5349ab78-1643-47e8-bfca-20d31e2f459f" path="/var/lib/kubelet/pods/5349ab78-1643-47e8-bfca-20d31e2f459f/volumes"
Jan 29 11:29:07 crc kubenswrapper[4593]: I0129 11:29:07.097168 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b37d23e-84cc-4059-a109-18fec66cd168" path="/var/lib/kubelet/pods/6b37d23e-84cc-4059-a109-18fec66cd168/volumes"
Jan 29 11:29:07 crc kubenswrapper[4593]: I0129 11:29:07.098360 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c560b58-f036-4946-aca6-d59c9502954e" path="/var/lib/kubelet/pods/8c560b58-f036-4946-aca6-d59c9502954e/volumes"
Jan 29 11:29:07 crc kubenswrapper[4593]: I0129 11:29:07.099442 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd801e2-136a-408b-a7e6-ab9a8dcfdd3b" path="/var/lib/kubelet/pods/afd801e2-136a-408b-a7e6-ab9a8dcfdd3b/volumes"
Jan 29 11:29:07 crc kubenswrapper[4593]: I0129 11:29:07.101460 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60bb61f-5204-4149-9922-70c6b0916c48" path="/var/lib/kubelet/pods/d60bb61f-5204-4149-9922-70c6b0916c48/volumes"
Jan 29 11:29:13 crc kubenswrapper[4593]: I0129 11:29:13.799843 4593 scope.go:117] "RemoveContainer" containerID="b4acf56e0984e495aea7b87f5e09b414ac2d3ef8fb7a27a8f9cffdcbe98b5b8c"
Jan 29 11:29:13 crc kubenswrapper[4593]: I0129 11:29:13.832521 4593 scope.go:117] "RemoveContainer" containerID="6c0216f7cb045c8475f6c48e3f50c549e3404a77f63e6ee461ea5240850a1620"
Jan 29 11:29:13 crc kubenswrapper[4593]: I0129 11:29:13.946614 4593 scope.go:117] "RemoveContainer" containerID="690f9e7a9c00c85e345179d71bb55173000c29b38e2987305e760408ff69f398"
Jan 29 11:29:14 crc kubenswrapper[4593]: I0129 11:29:14.023779 4593 scope.go:117] "RemoveContainer" containerID="4617f4b77856e9af93c03f010b2af2c31551118ca1d06a956c46e256c4dacc4c"
Jan 29 11:29:14 crc kubenswrapper[4593]: I0129 11:29:14.062201 4593 scope.go:117] "RemoveContainer" containerID="9d911603c45f632b1589627458c99f256ab970b9f33d34d26ebd6abdb5c39ade"
Jan 29 11:29:14 crc kubenswrapper[4593]: I0129 11:29:14.105942 4593 scope.go:117] "RemoveContainer" containerID="97bad51c47183a029a20953701c3f31d5be0e445cb1a365cf05eca76d77d4eb6"
Jan 29 11:29:16 crc kubenswrapper[4593]: I0129 11:29:16.077123 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:29:16 crc kubenswrapper[4593]: E0129 11:29:16.077726 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:29:31 crc kubenswrapper[4593]: I0129 11:29:31.076023 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:29:31 crc kubenswrapper[4593]: E0129 11:29:31.076921 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:29:34 crc kubenswrapper[4593]: I0129 11:29:34.057672 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" event={"ID":"fee0ef55-8edb-456c-9344-98a3b34d3aab","Type":"ContainerDied","Data":"5c199554479c727e40d38e1c73ab1886c6ddf721c6751444cd8da17a69216ec5"}
Jan 29 11:29:34 crc kubenswrapper[4593]: I0129 11:29:34.057616 4593 generic.go:334] "Generic (PLEG): container finished" podID="fee0ef55-8edb-456c-9344-98a3b34d3aab" containerID="5c199554479c727e40d38e1c73ab1886c6ddf721c6751444cd8da17a69216ec5" exitCode=0
Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.488249 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j"
Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.626908 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam\") pod \"fee0ef55-8edb-456c-9344-98a3b34d3aab\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") "
Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.627433 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory\") pod \"fee0ef55-8edb-456c-9344-98a3b34d3aab\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") "
Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.627549 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lpsk\" (UniqueName: \"kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk\") pod \"fee0ef55-8edb-456c-9344-98a3b34d3aab\" (UID: \"fee0ef55-8edb-456c-9344-98a3b34d3aab\") "
Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.638925 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk" (OuterVolumeSpecName: "kube-api-access-4lpsk") pod "fee0ef55-8edb-456c-9344-98a3b34d3aab" (UID: "fee0ef55-8edb-456c-9344-98a3b34d3aab"). InnerVolumeSpecName "kube-api-access-4lpsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.658490 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory" (OuterVolumeSpecName: "inventory") pod "fee0ef55-8edb-456c-9344-98a3b34d3aab" (UID: "fee0ef55-8edb-456c-9344-98a3b34d3aab"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.666026 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fee0ef55-8edb-456c-9344-98a3b34d3aab" (UID: "fee0ef55-8edb-456c-9344-98a3b34d3aab"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.733433 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.733506 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fee0ef55-8edb-456c-9344-98a3b34d3aab-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:35 crc kubenswrapper[4593]: I0129 11:29:35.733525 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lpsk\" (UniqueName: \"kubernetes.io/projected/fee0ef55-8edb-456c-9344-98a3b34d3aab-kube-api-access-4lpsk\") on node \"crc\" DevicePath \"\"" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.078870 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" event={"ID":"fee0ef55-8edb-456c-9344-98a3b34d3aab","Type":"ContainerDied","Data":"b0ae0b25831e041bfe96f6c4a3d79e01d947c880509926da1feb03c9559ebd7a"} Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.078941 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ae0b25831e041bfe96f6c4a3d79e01d947c880509926da1feb03c9559ebd7a" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.078983 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g462j" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.186072 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg"] Jan 29 11:29:36 crc kubenswrapper[4593]: E0129 11:29:36.187293 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee0ef55-8edb-456c-9344-98a3b34d3aab" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.187448 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee0ef55-8edb-456c-9344-98a3b34d3aab" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.187872 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee0ef55-8edb-456c-9344-98a3b34d3aab" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.188978 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.191686 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.202813 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.203172 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.205393 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg"] Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.205777 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.350858 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.351457 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.351657 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sbm9\" (UniqueName: \"kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.453329 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.453698 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sbm9\" (UniqueName: \"kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.453874 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.459580 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.460300 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.469927 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sbm9\" (UniqueName: \"kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-27mbg\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:36 crc kubenswrapper[4593]: I0129 11:29:36.506449 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:29:37 crc kubenswrapper[4593]: I0129 11:29:37.054258 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg"] Jan 29 11:29:37 crc kubenswrapper[4593]: I0129 11:29:37.059079 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:29:37 crc kubenswrapper[4593]: I0129 11:29:37.094913 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" event={"ID":"80d7dd41-691a-4411-97c2-91245d43b8ea","Type":"ContainerStarted","Data":"e32031e06aad254861bb54923223ee1752de351cad7516014ab280e7d0197bdf"} Jan 29 11:29:38 crc kubenswrapper[4593]: I0129 11:29:38.109589 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" event={"ID":"80d7dd41-691a-4411-97c2-91245d43b8ea","Type":"ContainerStarted","Data":"58d92a6cf90bfa5b104f1ad9533044c99bc8076e9572dec59724d020f65d5b0d"} Jan 29 11:29:38 crc kubenswrapper[4593]: I0129 11:29:38.128662 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" podStartSLOduration=1.494348211 podStartE2EDuration="2.128598825s" podCreationTimestamp="2026-01-29 11:29:36 +0000 UTC" firstStartedPulling="2026-01-29 11:29:37.058736602 +0000 UTC m=+1842.931770793" lastFinishedPulling="2026-01-29 11:29:37.692987216 +0000 UTC m=+1843.566021407" observedRunningTime="2026-01-29 11:29:38.124860045 +0000 UTC m=+1843.997894246" watchObservedRunningTime="2026-01-29 11:29:38.128598825 +0000 UTC 
m=+1844.001633016" Jan 29 11:29:46 crc kubenswrapper[4593]: I0129 11:29:46.074380 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:29:46 crc kubenswrapper[4593]: E0129 11:29:46.075083 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:29:48 crc kubenswrapper[4593]: I0129 11:29:48.596066 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-6d898fd894-sh94p" podUID="960bb326-dc22-43e5-bc4f-05c9ce9e26a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 11:29:57 crc kubenswrapper[4593]: I0129 11:29:57.075866 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:29:57 crc kubenswrapper[4593]: E0129 11:29:57.076730 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:29:58 crc kubenswrapper[4593]: I0129 11:29:58.055781 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vkj44"] Jan 29 11:29:58 crc kubenswrapper[4593]: I0129 11:29:58.069118 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vkj44"] Jan 29 11:29:59 crc kubenswrapper[4593]: I0129 11:29:59.085778 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a120fd3-e300-459e-9c9b-dd0f3da25621" path="/var/lib/kubelet/pods/9a120fd3-e300-459e-9c9b-dd0f3da25621/volumes" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.156932 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j"] Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.160786 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.165017 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.168166 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.172671 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j"] Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.243427 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.243604 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v479h\" (UniqueName: \"kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.243681 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.346054 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.346432 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v479h\" (UniqueName: \"kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.346729 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.350269 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume\") pod 
\"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.363963 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.368074 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v479h\" (UniqueName: \"kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h\") pod \"collect-profiles-29494770-zf92j\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.488045 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:00 crc kubenswrapper[4593]: I0129 11:30:00.933735 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j"] Jan 29 11:30:01 crc kubenswrapper[4593]: I0129 11:30:01.023934 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" event={"ID":"fe3bb310-71b1-4d29-a302-e06181c04f5f","Type":"ContainerStarted","Data":"c4c6458cd97ffb2aeecd77496fd68f83d6c2c4298bddc9c470b708adf9f616a5"} Jan 29 11:30:02 crc kubenswrapper[4593]: I0129 11:30:02.050460 4593 generic.go:334] "Generic (PLEG): container finished" podID="fe3bb310-71b1-4d29-a302-e06181c04f5f" containerID="f5dc8ed87db86aba663f3bdc857a868a9a85bafb38e9e0269844cbb77f36242a" exitCode=0 Jan 29 11:30:02 crc kubenswrapper[4593]: I0129 11:30:02.050567 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" event={"ID":"fe3bb310-71b1-4d29-a302-e06181c04f5f","Type":"ContainerDied","Data":"f5dc8ed87db86aba663f3bdc857a868a9a85bafb38e9e0269844cbb77f36242a"} Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.333505 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.410658 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v479h\" (UniqueName: \"kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h\") pod \"fe3bb310-71b1-4d29-a302-e06181c04f5f\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.410872 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume\") pod \"fe3bb310-71b1-4d29-a302-e06181c04f5f\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.410901 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume\") pod \"fe3bb310-71b1-4d29-a302-e06181c04f5f\" (UID: \"fe3bb310-71b1-4d29-a302-e06181c04f5f\") " Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.412202 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe3bb310-71b1-4d29-a302-e06181c04f5f" (UID: "fe3bb310-71b1-4d29-a302-e06181c04f5f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.415904 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe3bb310-71b1-4d29-a302-e06181c04f5f" (UID: "fe3bb310-71b1-4d29-a302-e06181c04f5f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.418404 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h" (OuterVolumeSpecName: "kube-api-access-v479h") pod "fe3bb310-71b1-4d29-a302-e06181c04f5f" (UID: "fe3bb310-71b1-4d29-a302-e06181c04f5f"). InnerVolumeSpecName "kube-api-access-v479h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.512714 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe3bb310-71b1-4d29-a302-e06181c04f5f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.513001 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe3bb310-71b1-4d29-a302-e06181c04f5f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:03 crc kubenswrapper[4593]: I0129 11:30:03.513068 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v479h\" (UniqueName: \"kubernetes.io/projected/fe3bb310-71b1-4d29-a302-e06181c04f5f-kube-api-access-v479h\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:04 crc kubenswrapper[4593]: I0129 11:30:04.070451 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" event={"ID":"fe3bb310-71b1-4d29-a302-e06181c04f5f","Type":"ContainerDied","Data":"c4c6458cd97ffb2aeecd77496fd68f83d6c2c4298bddc9c470b708adf9f616a5"} Jan 29 11:30:04 crc kubenswrapper[4593]: I0129 11:30:04.070498 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4c6458cd97ffb2aeecd77496fd68f83d6c2c4298bddc9c470b708adf9f616a5" Jan 29 11:30:04 crc kubenswrapper[4593]: I0129 11:30:04.070521 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j" Jan 29 11:30:12 crc kubenswrapper[4593]: I0129 11:30:12.074847 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:30:12 crc kubenswrapper[4593]: E0129 11:30:12.075618 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:30:14 crc kubenswrapper[4593]: I0129 11:30:14.344045 4593 scope.go:117] "RemoveContainer" containerID="81d2ae81ac7fd09960ec8dcecfdd7fb40c2612e8262393b7c2c13c07e2588b6b" Jan 29 11:30:25 crc kubenswrapper[4593]: I0129 11:30:25.081176 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:30:25 crc kubenswrapper[4593]: E0129 11:30:25.081930 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:30:30 crc kubenswrapper[4593]: I0129 11:30:30.053562 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-jfk6z"] Jan 29 11:30:30 crc kubenswrapper[4593]: I0129 11:30:30.065852 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-jfk6z"] Jan 29 11:30:31 crc kubenswrapper[4593]: I0129 
11:30:31.086724 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc4cd76-a47d-4691-906f-d1617455f100" path="/var/lib/kubelet/pods/ecc4cd76-a47d-4691-906f-d1617455f100/volumes" Jan 29 11:30:40 crc kubenswrapper[4593]: I0129 11:30:40.074869 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:30:40 crc kubenswrapper[4593]: E0129 11:30:40.075877 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:30:42 crc kubenswrapper[4593]: I0129 11:30:42.035882 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wc9fh"] Jan 29 11:30:42 crc kubenswrapper[4593]: I0129 11:30:42.068180 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wc9fh"] Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.008599 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"] Jan 29 11:30:43 crc kubenswrapper[4593]: E0129 11:30:43.009487 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3bb310-71b1-4d29-a302-e06181c04f5f" containerName="collect-profiles" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.009512 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3bb310-71b1-4d29-a302-e06181c04f5f" containerName="collect-profiles" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.009770 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3bb310-71b1-4d29-a302-e06181c04f5f" containerName="collect-profiles" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.011529 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.019381 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"] Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.083070 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.083146 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d68k4\" (UniqueName: \"kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.083274 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.085603 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d30b0b-741b-4275-bcd3-65f27a294d54" path="/var/lib/kubelet/pods/c4d30b0b-741b-4275-bcd3-65f27a294d54/volumes" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.184392 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.184521 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.184574 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d68k4\" (UniqueName: \"kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.184901 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.185275 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content\") pod \"redhat-operators-82d5x\" 
(UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.217702 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d68k4\" (UniqueName: \"kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4\") pod \"redhat-operators-82d5x\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") " pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:43 crc kubenswrapper[4593]: I0129 11:30:43.336001 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:30:44 crc kubenswrapper[4593]: I0129 11:30:44.135704 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"] Jan 29 11:30:44 crc kubenswrapper[4593]: I0129 11:30:44.386812 4593 generic.go:334] "Generic (PLEG): container finished" podID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerID="5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db" exitCode=0 Jan 29 11:30:44 crc kubenswrapper[4593]: I0129 11:30:44.387024 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerDied","Data":"5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db"} Jan 29 11:30:44 crc kubenswrapper[4593]: I0129 11:30:44.387113 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerStarted","Data":"a00c471bfe7ad5fb5e04c038f64e41f5f6ca0e1837c2dd3dfeed096385c3abac"} Jan 29 11:30:46 crc kubenswrapper[4593]: I0129 11:30:46.404903 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerStarted","Data":"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759"} Jan 29 11:30:52 crc kubenswrapper[4593]: I0129 11:30:52.455367 4593 generic.go:334] "Generic (PLEG): container finished" podID="80d7dd41-691a-4411-97c2-91245d43b8ea" containerID="58d92a6cf90bfa5b104f1ad9533044c99bc8076e9572dec59724d020f65d5b0d" exitCode=0 Jan 29 11:30:52 crc kubenswrapper[4593]: I0129 11:30:52.455454 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" event={"ID":"80d7dd41-691a-4411-97c2-91245d43b8ea","Type":"ContainerDied","Data":"58d92a6cf90bfa5b104f1ad9533044c99bc8076e9572dec59724d020f65d5b0d"} Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.075963 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:30:54 crc kubenswrapper[4593]: E0129 11:30:54.077712 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.123009 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.326743 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory\") pod \"80d7dd41-691a-4411-97c2-91245d43b8ea\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.327156 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sbm9\" (UniqueName: \"kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9\") pod \"80d7dd41-691a-4411-97c2-91245d43b8ea\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.327454 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam\") pod \"80d7dd41-691a-4411-97c2-91245d43b8ea\" (UID: \"80d7dd41-691a-4411-97c2-91245d43b8ea\") " Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.338026 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9" (OuterVolumeSpecName: "kube-api-access-9sbm9") pod "80d7dd41-691a-4411-97c2-91245d43b8ea" (UID: "80d7dd41-691a-4411-97c2-91245d43b8ea"). InnerVolumeSpecName "kube-api-access-9sbm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.360702 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "80d7dd41-691a-4411-97c2-91245d43b8ea" (UID: "80d7dd41-691a-4411-97c2-91245d43b8ea"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.377079 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory" (OuterVolumeSpecName: "inventory") pod "80d7dd41-691a-4411-97c2-91245d43b8ea" (UID: "80d7dd41-691a-4411-97c2-91245d43b8ea"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.430452 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.430509 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sbm9\" (UniqueName: \"kubernetes.io/projected/80d7dd41-691a-4411-97c2-91245d43b8ea-kube-api-access-9sbm9\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.430529 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80d7dd41-691a-4411-97c2-91245d43b8ea-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.478878 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" event={"ID":"80d7dd41-691a-4411-97c2-91245d43b8ea","Type":"ContainerDied","Data":"e32031e06aad254861bb54923223ee1752de351cad7516014ab280e7d0197bdf"} Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.478922 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32031e06aad254861bb54923223ee1752de351cad7516014ab280e7d0197bdf" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.478920 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-27mbg" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.590194 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p"] Jan 29 11:30:54 crc kubenswrapper[4593]: E0129 11:30:54.590589 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d7dd41-691a-4411-97c2-91245d43b8ea" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.590605 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d7dd41-691a-4411-97c2-91245d43b8ea" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.590850 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d7dd41-691a-4411-97c2-91245d43b8ea" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.592485 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.597423 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.598086 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.598086 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.601675 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.615762 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p"] Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.735452 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kfqq\" (UniqueName: \"kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.735619 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.735791 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.837436 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.837554 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kfqq\" (UniqueName: \"kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.837736 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.844363 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.851768 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.855833 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kfqq\" (UniqueName: \"kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:54 crc kubenswrapper[4593]: I0129 11:30:54.912260 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:30:55 crc kubenswrapper[4593]: I0129 11:30:55.552515 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p"] Jan 29 11:30:56 crc kubenswrapper[4593]: I0129 11:30:56.500619 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" event={"ID":"0f5fb9be-3781-4b9a-96d8-705593907345","Type":"ContainerStarted","Data":"377ce67068eb512799c63a093c00caf7f33bcd4e9f3a083a6f4884d34e4e543d"} Jan 29 11:30:56 crc kubenswrapper[4593]: I0129 11:30:56.503733 4593 generic.go:334] "Generic (PLEG): container finished" podID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerID="c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759" exitCode=0 Jan 29 11:30:56 crc kubenswrapper[4593]: I0129 11:30:56.503789 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerDied","Data":"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759"} Jan 29 11:30:57 crc kubenswrapper[4593]: I0129 11:30:57.449893 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:30:58 crc kubenswrapper[4593]: I0129 11:30:58.532884 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerStarted","Data":"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f"} Jan 29 11:30:58 crc kubenswrapper[4593]: I0129 11:30:58.536738 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" event={"ID":"0f5fb9be-3781-4b9a-96d8-705593907345","Type":"ContainerStarted","Data":"48cd5db24f135f274647760a88e09cee1d55032bbbad248fe310a7bb592d3aca"} Jan 29 11:30:58 crc kubenswrapper[4593]: I0129 11:30:58.561807 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-82d5x" podStartSLOduration=3.390233307 podStartE2EDuration="16.561789227s" podCreationTimestamp="2026-01-29 11:30:42 +0000 UTC" firstStartedPulling="2026-01-29 11:30:44.388864772 +0000 UTC m=+1910.261898963" lastFinishedPulling="2026-01-29 11:30:57.560420692 +0000 UTC m=+1923.433454883" observedRunningTime="2026-01-29 11:30:58.55822113 +0000 UTC m=+1924.431255321" watchObservedRunningTime="2026-01-29 11:30:58.561789227 +0000 UTC m=+1924.434823418" Jan 29 11:30:58 crc kubenswrapper[4593]: I0129 11:30:58.591882 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" podStartSLOduration=2.707972421 podStartE2EDuration="4.591853992s" podCreationTimestamp="2026-01-29 11:30:54 +0000 UTC" firstStartedPulling="2026-01-29 11:30:55.561912473 +0000 UTC m=+1921.434946664" lastFinishedPulling="2026-01-29 11:30:57.445794044 +0000 UTC m=+1923.318828235" observedRunningTime="2026-01-29 11:30:58.581479541 +0000 UTC m=+1924.454513732" watchObservedRunningTime="2026-01-29 11:30:58.591853992 +0000 UTC m=+1924.464888183" Jan 29 11:31:03 crc kubenswrapper[4593]: I0129 11:31:03.336154 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:31:03 crc kubenswrapper[4593]: I0129 11:31:03.337392 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82d5x" Jan 29 11:31:03 crc kubenswrapper[4593]: I0129 11:31:03.591348 4593 generic.go:334] "Generic (PLEG): container finished" podID="0f5fb9be-3781-4b9a-96d8-705593907345" containerID="48cd5db24f135f274647760a88e09cee1d55032bbbad248fe310a7bb592d3aca" exitCode=0 Jan 29 11:31:03 crc kubenswrapper[4593]: I0129 11:31:03.591425 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" event={"ID":"0f5fb9be-3781-4b9a-96d8-705593907345","Type":"ContainerDied","Data":"48cd5db24f135f274647760a88e09cee1d55032bbbad248fe310a7bb592d3aca"} Jan 29 11:31:04 crc kubenswrapper[4593]: I0129 11:31:04.389883 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82d5x" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" probeResult="failure" output=< Jan 29 11:31:04 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:31:04 crc kubenswrapper[4593]: > Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.023741 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.081507 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:31:05 crc kubenswrapper[4593]: E0129 11:31:05.081835 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.160098 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam\") pod \"0f5fb9be-3781-4b9a-96d8-705593907345\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.160377 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory\") pod \"0f5fb9be-3781-4b9a-96d8-705593907345\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.160434 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kfqq\" (UniqueName: \"kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq\") pod \"0f5fb9be-3781-4b9a-96d8-705593907345\" (UID: \"0f5fb9be-3781-4b9a-96d8-705593907345\") " Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.169939 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq" (OuterVolumeSpecName: "kube-api-access-2kfqq") pod "0f5fb9be-3781-4b9a-96d8-705593907345" (UID: "0f5fb9be-3781-4b9a-96d8-705593907345"). InnerVolumeSpecName "kube-api-access-2kfqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.189485 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory" (OuterVolumeSpecName: "inventory") pod "0f5fb9be-3781-4b9a-96d8-705593907345" (UID: "0f5fb9be-3781-4b9a-96d8-705593907345"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.202259 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f5fb9be-3781-4b9a-96d8-705593907345" (UID: "0f5fb9be-3781-4b9a-96d8-705593907345"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.263532 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.263570 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kfqq\" (UniqueName: \"kubernetes.io/projected/0f5fb9be-3781-4b9a-96d8-705593907345-kube-api-access-2kfqq\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.263586 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f5fb9be-3781-4b9a-96d8-705593907345-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.611025 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" event={"ID":"0f5fb9be-3781-4b9a-96d8-705593907345","Type":"ContainerDied","Data":"377ce67068eb512799c63a093c00caf7f33bcd4e9f3a083a6f4884d34e4e543d"} Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.611071 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.611087 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="377ce67068eb512799c63a093c00caf7f33bcd4e9f3a083a6f4884d34e4e543d" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.737069 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88"] Jan 29 11:31:05 crc kubenswrapper[4593]: E0129 11:31:05.737510 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5fb9be-3781-4b9a-96d8-705593907345" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.737533 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5fb9be-3781-4b9a-96d8-705593907345" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.737915 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5fb9be-3781-4b9a-96d8-705593907345" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.738599 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.741503 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.743228 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.743394 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.743803 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.745446 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88"] Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.873860 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sb7k\" (UniqueName: \"kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.874186 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.874229 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.975703 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sb7k\" (UniqueName: \"kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.975751 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.976678 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.980314 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:05 crc kubenswrapper[4593]: I0129 11:31:05.981460 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:06 crc kubenswrapper[4593]: I0129 11:31:06.001936 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sb7k\" (UniqueName: \"kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-p4f88\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:06 crc kubenswrapper[4593]: I0129 11:31:06.078656 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" Jan 29 11:31:06 crc kubenswrapper[4593]: I0129 11:31:06.649228 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88"] Jan 29 11:31:06 crc kubenswrapper[4593]: W0129 11:31:06.657915 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62d982c9_eb7a_4d9d_9cdd_2248c63b06fb.slice/crio-8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5 WatchSource:0}: Error finding container 8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5: Status 404 returned error can't find the container with id 8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5 Jan 29 11:31:07 crc kubenswrapper[4593]: I0129 11:31:07.628719 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" event={"ID":"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb","Type":"ContainerStarted","Data":"8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5"} Jan 29 11:31:08 crc kubenswrapper[4593]: I0129 11:31:08.639972 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" event={"ID":"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb","Type":"ContainerStarted","Data":"a834152221954d7f1ac3964aed5ebfdb5eb1ef9d8e56af1cff55ac1b4ff20571"} Jan 29 11:31:08 crc kubenswrapper[4593]: I0129 11:31:08.663878 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" podStartSLOduration=2.051789053 podStartE2EDuration="3.663858584s" podCreationTimestamp="2026-01-29 11:31:05 +0000 UTC" firstStartedPulling="2026-01-29 11:31:06.660158216 +0000 UTC m=+1932.533192407" lastFinishedPulling="2026-01-29 
11:31:08.272227747 +0000 UTC m=+1934.145261938" observedRunningTime="2026-01-29 11:31:08.661557132 +0000 UTC m=+1934.534591343" watchObservedRunningTime="2026-01-29 11:31:08.663858584 +0000 UTC m=+1934.536892775" Jan 29 11:31:13 crc kubenswrapper[4593]: I0129 11:31:13.048680 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4klpz"] Jan 29 11:31:13 crc kubenswrapper[4593]: I0129 11:31:13.059471 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4klpz"] Jan 29 11:31:13 crc kubenswrapper[4593]: I0129 11:31:13.087268 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f1974c-39c2-48ab-96f4-ad9b138bdd2a" path="/var/lib/kubelet/pods/39f1974c-39c2-48ab-96f4-ad9b138bdd2a/volumes" Jan 29 11:31:14 crc kubenswrapper[4593]: I0129 11:31:14.397540 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82d5x" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" probeResult="failure" output=< Jan 29 11:31:14 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:31:14 crc kubenswrapper[4593]: > Jan 29 11:31:14 crc kubenswrapper[4593]: I0129 11:31:14.464737 4593 scope.go:117] "RemoveContainer" containerID="becc277c4dab17e63d11203d4fe1da3af35724523a182bc72abe031b3a628c8a" Jan 29 11:31:14 crc kubenswrapper[4593]: I0129 11:31:14.545529 4593 scope.go:117] "RemoveContainer" containerID="96bdd94d7fe01d27f9002652fb0e024d5e4216b747eecd5f1013e14f7c20a7f7" Jan 29 11:31:14 crc kubenswrapper[4593]: I0129 11:31:14.604427 4593 scope.go:117] "RemoveContainer" containerID="1ea0d35aaa814eafe90d3b552ce2cc9ecd1b47dc4d9629fa6b4ad38749d52cc1" Jan 29 11:31:17 crc kubenswrapper[4593]: I0129 11:31:17.075478 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:31:17 crc kubenswrapper[4593]: E0129 11:31:17.076263 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:31:24 crc kubenswrapper[4593]: I0129 11:31:24.380591 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82d5x" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" probeResult="failure" output=< Jan 29 11:31:24 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:31:24 crc kubenswrapper[4593]: > Jan 29 11:31:32 crc kubenswrapper[4593]: I0129 11:31:32.075262 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:31:32 crc kubenswrapper[4593]: E0129 11:31:32.076062 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:31:34 crc kubenswrapper[4593]: I0129 
11:31:34.397188 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-82d5x" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:31:34 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:31:34 crc kubenswrapper[4593]: >
Jan 29 11:31:43 crc kubenswrapper[4593]: I0129 11:31:43.407708 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-82d5x"
Jan 29 11:31:43 crc kubenswrapper[4593]: I0129 11:31:43.463999 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-82d5x"
Jan 29 11:31:43 crc kubenswrapper[4593]: I0129 11:31:43.646543 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"]
Jan 29 11:31:44 crc kubenswrapper[4593]: I0129 11:31:44.075622 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:31:44 crc kubenswrapper[4593]: E0129 11:31:44.075909 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:31:44 crc kubenswrapper[4593]: I0129 11:31:44.948831 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-82d5x" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server" containerID="cri-o://e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f" gracePeriod=2
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.412018 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82d5x"
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.535285 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content\") pod \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") "
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.535534 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d68k4\" (UniqueName: \"kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4\") pod \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") "
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.535718 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities\") pod \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\" (UID: \"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4\") "
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.537284 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities" (OuterVolumeSpecName: "utilities") pod "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" (UID: "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.544876 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4" (OuterVolumeSpecName: "kube-api-access-d68k4") pod "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" (UID: "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4"). InnerVolumeSpecName "kube-api-access-d68k4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.639667 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d68k4\" (UniqueName: \"kubernetes.io/projected/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-kube-api-access-d68k4\") on node \"crc\" DevicePath \"\""
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.639943 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.673256 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" (UID: "2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.742064 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.959902 4593 generic.go:334] "Generic (PLEG): container finished" podID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerID="e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f" exitCode=0
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.959962 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerDied","Data":"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f"}
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.960000 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82d5x" event={"ID":"2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4","Type":"ContainerDied","Data":"a00c471bfe7ad5fb5e04c038f64e41f5f6ca0e1837c2dd3dfeed096385c3abac"}
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.960030 4593 scope.go:117] "RemoveContainer" containerID="e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f"
Jan 29 11:31:45 crc kubenswrapper[4593]: I0129 11:31:45.960031 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-82d5x"
Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.000782 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"]
Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.002097 4593 scope.go:117] "RemoveContainer" containerID="c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759"
Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.010938 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-82d5x"]
Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.039111 4593 scope.go:117] "RemoveContainer" containerID="5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db"
Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.073105 4593 scope.go:117] "RemoveContainer" containerID="e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f"
Jan 29 11:31:46 crc kubenswrapper[4593]: E0129 11:31:46.073556 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f\": container with ID starting with e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f not found: ID does not exist" containerID="e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f"
Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.073678 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f"} err="failed to get container status \"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f\": rpc error: code = NotFound desc = could not find container \"e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f\": container with ID starting with e0ba56a13f861a40db48db95fd5e2b9c4559954f38b81bb8acac7288653cf17f not found: ID does not exist"
Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.073779 4593 scope.go:117] "RemoveContainer" containerID="c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759"
Jan 29 11:31:46 crc kubenswrapper[4593]: E0129 11:31:46.075235 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759\": container with ID starting with c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759 not found: ID does not exist" containerID="c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759"
Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.075270 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759"} err="failed to get container status \"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759\": rpc error: code = NotFound desc = could not find container \"c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759\": container with ID starting with c8f55144c410fc0c73e18e7e79503904faa947680d4d49769f69371b1ac60759 not found: ID does not exist"
Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.075292 4593 scope.go:117] "RemoveContainer" containerID="5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db"
Jan 29 11:31:46 crc kubenswrapper[4593]: E0129 11:31:46.075866 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db\": container with ID starting with 5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db not found: ID does not exist" containerID="5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db"
Jan 29 11:31:46 crc kubenswrapper[4593]: I0129 11:31:46.075893 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db"} err="failed to get container status \"5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db\": rpc error: code = NotFound desc = could not find container \"5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db\": container with ID starting with 5851c2e92ef970071e21e6f7ee7488e1b52c34d93589113c2dbf8c2bd01fe8db not found: ID does not exist"
Jan 29 11:31:47 crc kubenswrapper[4593]: I0129 11:31:47.088450 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" path="/var/lib/kubelet/pods/2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4/volumes"
Jan 29 11:31:50 crc kubenswrapper[4593]: I0129 11:31:50.002261 4593 generic.go:334] "Generic (PLEG): container finished" podID="62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" containerID="a834152221954d7f1ac3964aed5ebfdb5eb1ef9d8e56af1cff55ac1b4ff20571" exitCode=0
Jan 29 11:31:50 crc kubenswrapper[4593]: I0129 11:31:50.002342 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" event={"ID":"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb","Type":"ContainerDied","Data":"a834152221954d7f1ac3964aed5ebfdb5eb1ef9d8e56af1cff55ac1b4ff20571"}
Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.433420 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88"
Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.584201 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sb7k\" (UniqueName: \"kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k\") pod \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") "
Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.584621 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory\") pod \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") "
Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.585441 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam\") pod \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\" (UID: \"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb\") "
Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.607960 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k" (OuterVolumeSpecName: "kube-api-access-7sb7k") pod "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" (UID: "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb"). InnerVolumeSpecName "kube-api-access-7sb7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.620116 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" (UID: "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.621538 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory" (OuterVolumeSpecName: "inventory") pod "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" (UID: "62d982c9-eb7a-4d9d-9cdd-2248c63b06fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.687452 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sb7k\" (UniqueName: \"kubernetes.io/projected/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-kube-api-access-7sb7k\") on node \"crc\" DevicePath \"\""
Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.687496 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-inventory\") on node \"crc\" DevicePath \"\""
Jan 29 11:31:51 crc kubenswrapper[4593]: I0129 11:31:51.687507 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/62d982c9-eb7a-4d9d-9cdd-2248c63b06fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.024850 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88" event={"ID":"62d982c9-eb7a-4d9d-9cdd-2248c63b06fb","Type":"ContainerDied","Data":"8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5"}
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.024884 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d5b92283fb5060ef6b06aeb3e80b8769e5866836b6a5ae333ba6bf6faa250d5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.025297 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-p4f88"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.234291 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"]
Jan 29 11:31:52 crc kubenswrapper[4593]: E0129 11:31:52.234818 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="extract-utilities"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.234838 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="extract-utilities"
Jan 29 11:31:52 crc kubenswrapper[4593]: E0129 11:31:52.234850 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.234857 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server"
Jan 29 11:31:52 crc kubenswrapper[4593]: E0129 11:31:52.234885 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.234893 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:31:52 crc kubenswrapper[4593]: E0129 11:31:52.234902 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="extract-content"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.234908 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="extract-content"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.235104 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b28f9e8-4b88-4a40-9841-6ff92ef1e0d4" containerName="registry-server"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.235124 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="62d982c9-eb7a-4d9d-9cdd-2248c63b06fb" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.235773 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.239088 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.239424 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.239585 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.242317 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.260129 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"]
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.403148 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmf7x\" (UniqueName: \"kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.403241 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.403440 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.505680 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmf7x\" (UniqueName: \"kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.505763 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.505873 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.525115 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.529413 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.542737 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmf7x\" (UniqueName: \"kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:52 crc kubenswrapper[4593]: I0129 11:31:52.560460 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:31:53 crc kubenswrapper[4593]: I0129 11:31:53.150303 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"]
Jan 29 11:31:54 crc kubenswrapper[4593]: I0129 11:31:54.050123 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" event={"ID":"83fa3cd4-ce6a-44bb-b652-c783504941f9","Type":"ContainerStarted","Data":"f6653ebeeff453ab657fe873f5506c2d5b9c531126438ca29b0e219b1ac1b699"}
Jan 29 11:31:55 crc kubenswrapper[4593]: I0129 11:31:55.062616 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" event={"ID":"83fa3cd4-ce6a-44bb-b652-c783504941f9","Type":"ContainerStarted","Data":"00574ec0eb21e974d0ee0f68191e26342a0c84daa7fa9850d309f82ed1b27a97"}
Jan 29 11:31:55 crc kubenswrapper[4593]: I0129 11:31:55.105810 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" podStartSLOduration=2.024047303 podStartE2EDuration="3.105788767s" podCreationTimestamp="2026-01-29 11:31:52 +0000 UTC" firstStartedPulling="2026-01-29 11:31:53.163087484 +0000 UTC m=+1979.036121675" lastFinishedPulling="2026-01-29 11:31:54.244828928 +0000 UTC m=+1980.117863139" observedRunningTime="2026-01-29 11:31:55.095954501 +0000 UTC m=+1980.968988692" watchObservedRunningTime="2026-01-29 11:31:55.105788767 +0000 UTC m=+1980.978822948"
Jan 29 11:31:57 crc kubenswrapper[4593]: I0129 11:31:57.074908 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:31:57 crc kubenswrapper[4593]: E0129 11:31:57.075553 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:32:11 crc kubenswrapper[4593]: I0129 11:32:11.075869 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec"
Jan 29 11:32:12 crc kubenswrapper[4593]: I0129 11:32:12.225227 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972"}
Jan 29 11:32:46 crc kubenswrapper[4593]: I0129 11:32:46.515936 4593 generic.go:334] "Generic (PLEG): container finished" podID="83fa3cd4-ce6a-44bb-b652-c783504941f9" containerID="00574ec0eb21e974d0ee0f68191e26342a0c84daa7fa9850d309f82ed1b27a97" exitCode=0
Jan 29 11:32:46 crc kubenswrapper[4593]: I0129 11:32:46.516014 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" event={"ID":"83fa3cd4-ce6a-44bb-b652-c783504941f9","Type":"ContainerDied","Data":"00574ec0eb21e974d0ee0f68191e26342a0c84daa7fa9850d309f82ed1b27a97"}
Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.682482 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.831534 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmf7x\" (UniqueName: \"kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x\") pod \"83fa3cd4-ce6a-44bb-b652-c783504941f9\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") "
Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.831867 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam\") pod \"83fa3cd4-ce6a-44bb-b652-c783504941f9\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") "
Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.831988 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory\") pod \"83fa3cd4-ce6a-44bb-b652-c783504941f9\" (UID: \"83fa3cd4-ce6a-44bb-b652-c783504941f9\") "
Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.840556 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x" (OuterVolumeSpecName: "kube-api-access-cmf7x") pod "83fa3cd4-ce6a-44bb-b652-c783504941f9" (UID: "83fa3cd4-ce6a-44bb-b652-c783504941f9"). InnerVolumeSpecName "kube-api-access-cmf7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.869819 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory" (OuterVolumeSpecName: "inventory") pod "83fa3cd4-ce6a-44bb-b652-c783504941f9" (UID: "83fa3cd4-ce6a-44bb-b652-c783504941f9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.875888 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83fa3cd4-ce6a-44bb-b652-c783504941f9" (UID: "83fa3cd4-ce6a-44bb-b652-c783504941f9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.934743 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmf7x\" (UniqueName: \"kubernetes.io/projected/83fa3cd4-ce6a-44bb-b652-c783504941f9-kube-api-access-cmf7x\") on node \"crc\" DevicePath \"\""
Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.934790 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 11:32:48 crc kubenswrapper[4593]: I0129 11:32:48.934804 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/83fa3cd4-ce6a-44bb-b652-c783504941f9-inventory\") on node \"crc\" DevicePath \"\""
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.337153 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cfk97"]
Jan 29 11:32:49 crc kubenswrapper[4593]: E0129 11:32:49.339465 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fa3cd4-ce6a-44bb-b652-c783504941f9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.339608 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fa3cd4-ce6a-44bb-b652-c783504941f9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.339973 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="83fa3cd4-ce6a-44bb-b652-c783504941f9" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.340726 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.352329 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cfk97"]
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.443539 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.443689 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.443762 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrqtl\" (UniqueName: \"kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.544918 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.545002 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrqtl\" (UniqueName: \"kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.545104 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.551430 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.552319 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.554584 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5" event={"ID":"83fa3cd4-ce6a-44bb-b652-c783504941f9","Type":"ContainerDied","Data":"f6653ebeeff453ab657fe873f5506c2d5b9c531126438ca29b0e219b1ac1b699"}
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.554617 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6653ebeeff453ab657fe873f5506c2d5b9c531126438ca29b0e219b1ac1b699"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.554684 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.564322 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrqtl\" (UniqueName: \"kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl\") pod \"ssh-known-hosts-edpm-deployment-cfk97\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") " pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:49 crc kubenswrapper[4593]: I0129 11:32:49.666235 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:50 crc kubenswrapper[4593]: I0129 11:32:50.206163 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-cfk97"]
Jan 29 11:32:50 crc kubenswrapper[4593]: I0129 11:32:50.569605 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" event={"ID":"c22e1d76-6585-46e2-9c31-5c002e021882","Type":"ContainerStarted","Data":"075b8459fc88b5c9f61f00148c508a0e3bb632f0c9eb6956820e3ab0c4348252"}
Jan 29 11:32:51 crc kubenswrapper[4593]: I0129 11:32:51.638764 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" event={"ID":"c22e1d76-6585-46e2-9c31-5c002e021882","Type":"ContainerStarted","Data":"0cdfedabb2cb51565fe633b2201e57d5c189e9bb0541113dc3ec3fce82165e56"}
Jan 29 11:32:51 crc kubenswrapper[4593]: I0129 11:32:51.672756 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" podStartSLOduration=1.93452673 podStartE2EDuration="2.672729342s" podCreationTimestamp="2026-01-29 11:32:49 +0000 UTC" firstStartedPulling="2026-01-29 11:32:50.206712099 +0000 UTC m=+2036.079746320" lastFinishedPulling="2026-01-29 11:32:50.944914741 +0000 UTC m=+2036.817948932" observedRunningTime="2026-01-29 11:32:51.660411117 +0000 UTC m=+2037.533445318" watchObservedRunningTime="2026-01-29 11:32:51.672729342 +0000 UTC m=+2037.545763543"
Jan 29 11:32:57 crc kubenswrapper[4593]: I0129 11:32:57.696693 4593 generic.go:334] "Generic (PLEG): container finished" podID="c22e1d76-6585-46e2-9c31-5c002e021882" containerID="0cdfedabb2cb51565fe633b2201e57d5c189e9bb0541113dc3ec3fce82165e56" exitCode=0
Jan 29 11:32:57 crc kubenswrapper[4593]: I0129 11:32:57.696919 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" event={"ID":"c22e1d76-6585-46e2-9c31-5c002e021882","Type":"ContainerDied","Data":"0cdfedabb2cb51565fe633b2201e57d5c189e9bb0541113dc3ec3fce82165e56"}
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.273828 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.427173 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrqtl\" (UniqueName: \"kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl\") pod \"c22e1d76-6585-46e2-9c31-5c002e021882\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") "
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.427437 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam\") pod \"c22e1d76-6585-46e2-9c31-5c002e021882\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") "
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.427624 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0\") pod \"c22e1d76-6585-46e2-9c31-5c002e021882\" (UID: \"c22e1d76-6585-46e2-9c31-5c002e021882\") "
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.447026 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl" (OuterVolumeSpecName: "kube-api-access-jrqtl") pod "c22e1d76-6585-46e2-9c31-5c002e021882" (UID: "c22e1d76-6585-46e2-9c31-5c002e021882"). InnerVolumeSpecName "kube-api-access-jrqtl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.458243 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c22e1d76-6585-46e2-9c31-5c002e021882" (UID: "c22e1d76-6585-46e2-9c31-5c002e021882"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.462257 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c22e1d76-6585-46e2-9c31-5c002e021882" (UID: "c22e1d76-6585-46e2-9c31-5c002e021882"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.529741 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.529783 4593 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c22e1d76-6585-46e2-9c31-5c002e021882-inventory-0\") on node \"crc\" DevicePath \"\""
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.529796 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrqtl\" (UniqueName: \"kubernetes.io/projected/c22e1d76-6585-46e2-9c31-5c002e021882-kube-api-access-jrqtl\") on node \"crc\" DevicePath \"\""
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.722415 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97" event={"ID":"c22e1d76-6585-46e2-9c31-5c002e021882","Type":"ContainerDied","Data":"075b8459fc88b5c9f61f00148c508a0e3bb632f0c9eb6956820e3ab0c4348252"}
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.722488 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075b8459fc88b5c9f61f00148c508a0e3bb632f0c9eb6956820e3ab0c4348252"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.722572 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-cfk97"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.818483 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"]
Jan 29 11:32:59 crc kubenswrapper[4593]: E0129 11:32:59.819385 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22e1d76-6585-46e2-9c31-5c002e021882" containerName="ssh-known-hosts-edpm-deployment"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.819417 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22e1d76-6585-46e2-9c31-5c002e021882" containerName="ssh-known-hosts-edpm-deployment"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.819782 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22e1d76-6585-46e2-9c31-5c002e021882" containerName="ssh-known-hosts-edpm-deployment"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.821027 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.826546 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.827308 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.829979 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.830900 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.848356 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"]
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.945248 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.945351 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:32:59 crc kubenswrapper[4593]: I0129 11:32:59.945454 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnhhl\" (UniqueName: \"kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.047049 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnhhl\" (UniqueName: \"kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.047150 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.047204 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.051345 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.053481 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.070036 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnhhl\" (UniqueName: \"kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lz46t\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.144908 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:33:00 crc kubenswrapper[4593]: I0129 11:33:00.747172 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"]
Jan 29 11:33:01 crc kubenswrapper[4593]: I0129 11:33:01.748024 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" event={"ID":"b1f286ec-6f85-44c4-94f5-f66bc21c2a64","Type":"ContainerStarted","Data":"12b72897b5f5d11caf6ec17f7553c3a6ceba03b6a70dd8696ec59dda1c8487cb"}
Jan 29 11:33:01 crc kubenswrapper[4593]: I0129 11:33:01.748578 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" event={"ID":"b1f286ec-6f85-44c4-94f5-f66bc21c2a64","Type":"ContainerStarted","Data":"946f49a462d783d56d9cb7915ab170aea3fa4354acdbbab852861c916716c3a4"}
Jan 29 11:33:01 crc kubenswrapper[4593]: I0129 11:33:01.793437 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" podStartSLOduration=2.205929375 podStartE2EDuration="2.793390831s" podCreationTimestamp="2026-01-29 11:32:59 +0000 UTC" firstStartedPulling="2026-01-29 11:33:00.740319093 +0000 UTC m=+2046.613353294" lastFinishedPulling="2026-01-29 11:33:01.327780559 +0000 UTC m=+2047.200814750" observedRunningTime="2026-01-29 11:33:01.784190442 +0000 UTC m=+2047.657224633" watchObservedRunningTime="2026-01-29 11:33:01.793390831 +0000 UTC m=+2047.666425032"
Jan 29 11:33:09 crc kubenswrapper[4593]: I0129 11:33:09.833433 4593 generic.go:334] "Generic (PLEG): container finished" podID="b1f286ec-6f85-44c4-94f5-f66bc21c2a64" containerID="12b72897b5f5d11caf6ec17f7553c3a6ceba03b6a70dd8696ec59dda1c8487cb" exitCode=0
Jan 29 11:33:09 crc kubenswrapper[4593]: I0129 11:33:09.833513 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" event={"ID":"b1f286ec-6f85-44c4-94f5-f66bc21c2a64","Type":"ContainerDied","Data":"12b72897b5f5d11caf6ec17f7553c3a6ceba03b6a70dd8696ec59dda1c8487cb"}
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.235433 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.326494 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnhhl\" (UniqueName: \"kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl\") pod \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") "
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.327583 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam\") pod \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") "
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.327913 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory\") pod \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\" (UID: \"b1f286ec-6f85-44c4-94f5-f66bc21c2a64\") "
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.338239 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl" (OuterVolumeSpecName: "kube-api-access-jnhhl") pod "b1f286ec-6f85-44c4-94f5-f66bc21c2a64" (UID: "b1f286ec-6f85-44c4-94f5-f66bc21c2a64"). InnerVolumeSpecName "kube-api-access-jnhhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.362107 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b1f286ec-6f85-44c4-94f5-f66bc21c2a64" (UID: "b1f286ec-6f85-44c4-94f5-f66bc21c2a64"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.367355 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory" (OuterVolumeSpecName: "inventory") pod "b1f286ec-6f85-44c4-94f5-f66bc21c2a64" (UID: "b1f286ec-6f85-44c4-94f5-f66bc21c2a64"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.431115 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.431334 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-inventory\") on node \"crc\" DevicePath \"\""
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.431440 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnhhl\" (UniqueName: \"kubernetes.io/projected/b1f286ec-6f85-44c4-94f5-f66bc21c2a64-kube-api-access-jnhhl\") on node \"crc\" DevicePath \"\""
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.850310 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t" event={"ID":"b1f286ec-6f85-44c4-94f5-f66bc21c2a64","Type":"ContainerDied","Data":"946f49a462d783d56d9cb7915ab170aea3fa4354acdbbab852861c916716c3a4"}
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.850376 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="946f49a462d783d56d9cb7915ab170aea3fa4354acdbbab852861c916716c3a4"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.850681 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lz46t"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.940376 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"]
Jan 29 11:33:11 crc kubenswrapper[4593]: E0129 11:33:11.941288 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f286ec-6f85-44c4-94f5-f66bc21c2a64" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.941392 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f286ec-6f85-44c4-94f5-f66bc21c2a64" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.941749 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f286ec-6f85-44c4-94f5-f66bc21c2a64" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.942702 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.945897 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.946035 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.946610 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.947769 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 29 11:33:11 crc kubenswrapper[4593]: I0129 11:33:11.952329 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"]
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.042099 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.042444 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dp7d\" (UniqueName: \"kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.042724 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.144522 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.144618 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.144657 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dp7d\" (UniqueName: \"kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.148866 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.153460 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.164959 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dp7d\" (UniqueName: \"kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-jps44\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.264355 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:12 crc kubenswrapper[4593]: I0129 11:33:12.907761 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"]
Jan 29 11:33:13 crc kubenswrapper[4593]: I0129 11:33:13.876350 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" event={"ID":"9a263e61-6654-4030-bd96-c1baa9314111","Type":"ContainerStarted","Data":"c4c21af487b9c0edc57b286f105bf2a456629dead664ba5178ff2d6c7a314a0c"}
Jan 29 11:33:13 crc kubenswrapper[4593]: I0129 11:33:13.876722 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" event={"ID":"9a263e61-6654-4030-bd96-c1baa9314111","Type":"ContainerStarted","Data":"d5a36adf4791937de8999978c5b33642cd27043f6bf0df4cfd53332f0acfd5ea"}
Jan 29 11:33:22 crc kubenswrapper[4593]: I0129 11:33:22.968618 4593 generic.go:334] "Generic (PLEG): container finished" podID="9a263e61-6654-4030-bd96-c1baa9314111" containerID="c4c21af487b9c0edc57b286f105bf2a456629dead664ba5178ff2d6c7a314a0c" exitCode=0
Jan 29 11:33:22 crc kubenswrapper[4593]: I0129 11:33:22.968819 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" event={"ID":"9a263e61-6654-4030-bd96-c1baa9314111","Type":"ContainerDied","Data":"c4c21af487b9c0edc57b286f105bf2a456629dead664ba5178ff2d6c7a314a0c"}
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.532126 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.591159 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam\") pod \"9a263e61-6654-4030-bd96-c1baa9314111\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") "
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.591681 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory\") pod \"9a263e61-6654-4030-bd96-c1baa9314111\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") "
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.591846 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dp7d\" (UniqueName: \"kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d\") pod \"9a263e61-6654-4030-bd96-c1baa9314111\" (UID: \"9a263e61-6654-4030-bd96-c1baa9314111\") "
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.598701 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d" (OuterVolumeSpecName: "kube-api-access-2dp7d") pod "9a263e61-6654-4030-bd96-c1baa9314111" (UID: "9a263e61-6654-4030-bd96-c1baa9314111"). InnerVolumeSpecName "kube-api-access-2dp7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.625194 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a263e61-6654-4030-bd96-c1baa9314111" (UID: "9a263e61-6654-4030-bd96-c1baa9314111"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.629542 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory" (OuterVolumeSpecName: "inventory") pod "9a263e61-6654-4030-bd96-c1baa9314111" (UID: "9a263e61-6654-4030-bd96-c1baa9314111"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.693955 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-inventory\") on node \"crc\" DevicePath \"\""
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.694158 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dp7d\" (UniqueName: \"kubernetes.io/projected/9a263e61-6654-4030-bd96-c1baa9314111-kube-api-access-2dp7d\") on node \"crc\" DevicePath \"\""
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.694216 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a263e61-6654-4030-bd96-c1baa9314111-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.991130 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44" event={"ID":"9a263e61-6654-4030-bd96-c1baa9314111","Type":"ContainerDied","Data":"d5a36adf4791937de8999978c5b33642cd27043f6bf0df4cfd53332f0acfd5ea"}
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.991444 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5a36adf4791937de8999978c5b33642cd27043f6bf0df4cfd53332f0acfd5ea"
Jan 29 11:33:24 crc kubenswrapper[4593]: I0129 11:33:24.991180 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-jps44"
Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.135511 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68"]
Jan 29 11:33:25 crc kubenswrapper[4593]: E0129 11:33:25.136031 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a263e61-6654-4030-bd96-c1baa9314111" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.136058 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a263e61-6654-4030-bd96-c1baa9314111" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.136310 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a263e61-6654-4030-bd96-c1baa9314111" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.138705 4593 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.143353 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.143823 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.144468 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.144587 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.144511 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.144687 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.144723 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.146008 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.155553 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68"] Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308116 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308181 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308237 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308275 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308465 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308534 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q89hk\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308719 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308855 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.308926 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.309005 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.309054 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: 
\"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.309082 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.309110 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.309133 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410629 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410754 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410788 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410825 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410849 
4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410899 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410930 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.410968 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.411783 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.411833 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.411863 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q89hk\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.411907 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.411989 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.412035 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.415015 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.415488 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.416315 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.416343 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.417137 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc 
kubenswrapper[4593]: I0129 11:33:25.419421 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.420489 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.421405 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.421456 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.422929 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.431453 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.431708 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.433868 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.437112 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q89hk\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-x2n68\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:25 crc kubenswrapper[4593]: I0129 11:33:25.506136 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:33:26 crc kubenswrapper[4593]: I0129 11:33:26.023518 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68"] Jan 29 11:33:27 crc kubenswrapper[4593]: I0129 11:33:27.012871 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" event={"ID":"0418390b-7622-490c-ad95-ec5eac075440","Type":"ContainerStarted","Data":"9596acadaeeeff307f766346fb427baede4f5c2973b3737c1943c3387e09ddb5"} Jan 29 11:33:27 crc kubenswrapper[4593]: I0129 11:33:27.013420 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" event={"ID":"0418390b-7622-490c-ad95-ec5eac075440","Type":"ContainerStarted","Data":"ec5d29b14d53bd5f62869f75adcc252c43d91c395941b786e46c53db56831c57"} Jan 29 11:33:27 crc kubenswrapper[4593]: I0129 11:33:27.033743 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" podStartSLOduration=1.6023032050000001 podStartE2EDuration="2.03370377s" podCreationTimestamp="2026-01-29 11:33:25 +0000 UTC" firstStartedPulling="2026-01-29 11:33:26.033178597 +0000 UTC m=+2071.906212788" lastFinishedPulling="2026-01-29 11:33:26.464579162 +0000 UTC m=+2072.337613353" observedRunningTime="2026-01-29 11:33:27.029256079 +0000 UTC m=+2072.902290290" watchObservedRunningTime="2026-01-29 11:33:27.03370377 +0000 UTC m=+2072.906737981" Jan 29 11:34:03 crc kubenswrapper[4593]: I0129 11:34:03.406868 4593 generic.go:334] "Generic (PLEG): container finished" podID="0418390b-7622-490c-ad95-ec5eac075440" containerID="9596acadaeeeff307f766346fb427baede4f5c2973b3737c1943c3387e09ddb5" exitCode=0 Jan 29 11:34:03 crc kubenswrapper[4593]: I0129 11:34:03.407082 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" event={"ID":"0418390b-7622-490c-ad95-ec5eac075440","Type":"ContainerDied","Data":"9596acadaeeeff307f766346fb427baede4f5c2973b3737c1943c3387e09ddb5"} Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.860088 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906663 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906711 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906739 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906762 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906804 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906837 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906857 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906883 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906919 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle\") 
pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906957 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906975 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.906992 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.907043 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.907063 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q89hk\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk\") pod \"0418390b-7622-490c-ad95-ec5eac075440\" (UID: \"0418390b-7622-490c-ad95-ec5eac075440\") " Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.916993 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.917137 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk" (OuterVolumeSpecName: "kube-api-access-q89hk") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "kube-api-access-q89hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.917190 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.917231 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.917458 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.917948 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.918963 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.920785 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.925273 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.927340 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.928771 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.938105 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.952326 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:04 crc kubenswrapper[4593]: I0129 11:34:04.961204 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory" (OuterVolumeSpecName: "inventory") pod "0418390b-7622-490c-ad95-ec5eac075440" (UID: "0418390b-7622-490c-ad95-ec5eac075440"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010123 4593 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010356 4593 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010505 4593 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010605 4593 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010723 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010822 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q89hk\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-kube-api-access-q89hk\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.010932 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011028 4593 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011208 4593 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011316 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011416 4593 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011567 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011701 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/0418390b-7622-490c-ad95-ec5eac075440-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.011819 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0418390b-7622-490c-ad95-ec5eac075440-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.432396 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" event={"ID":"0418390b-7622-490c-ad95-ec5eac075440","Type":"ContainerDied","Data":"ec5d29b14d53bd5f62869f75adcc252c43d91c395941b786e46c53db56831c57"} Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.432694 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec5d29b14d53bd5f62869f75adcc252c43d91c395941b786e46c53db56831c57" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.432469 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-x2n68" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.559687 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl"] Jan 29 11:34:05 crc kubenswrapper[4593]: E0129 11:34:05.560377 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0418390b-7622-490c-ad95-ec5eac075440" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.560503 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0418390b-7622-490c-ad95-ec5eac075440" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.560850 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="0418390b-7622-490c-ad95-ec5eac075440" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.561656 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.564916 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.567762 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.568210 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.568444 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.568876 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.585129 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl"] Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.621916 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.621988 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nkgg\" (UniqueName: \"kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.622013 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.622069 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.622122 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.723557 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.723715 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nkgg\" (UniqueName: \"kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.723755 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.723858 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.723951 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.725403 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.731057 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.732036 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.732821 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.755655 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nkgg\" (UniqueName: \"kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ftxjl\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:05 crc kubenswrapper[4593]: I0129 11:34:05.883585 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:34:06 crc kubenswrapper[4593]: I0129 11:34:06.486056 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl"] Jan 29 11:34:07 crc kubenswrapper[4593]: I0129 11:34:07.457556 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" event={"ID":"80db2d7c-94e6-418b-a0b4-2b4064356e4b","Type":"ContainerStarted","Data":"3d1b42f49400161b1d8c95796bd799e62ffe6e307b7fcee26199ead4efaeeb5f"} Jan 29 11:34:07 crc kubenswrapper[4593]: I0129 11:34:07.457973 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" event={"ID":"80db2d7c-94e6-418b-a0b4-2b4064356e4b","Type":"ContainerStarted","Data":"7b36f3307cde3252ef687db46ed25297713e29f6036f5d4211d41f1c07171c14"} Jan 29 11:34:07 crc kubenswrapper[4593]: I0129 11:34:07.481410 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" podStartSLOduration=2.041449705 podStartE2EDuration="2.481372517s" podCreationTimestamp="2026-01-29 11:34:05 +0000 UTC" firstStartedPulling="2026-01-29 11:34:06.483143997 +0000 UTC m=+2112.356178188" lastFinishedPulling="2026-01-29 11:34:06.923066809 +0000 UTC m=+2112.796101000" observedRunningTime="2026-01-29 11:34:07.477738569 +0000 UTC m=+2113.350772790" watchObservedRunningTime="2026-01-29 11:34:07.481372517 +0000 UTC m=+2113.354406728" Jan 29 11:34:33 crc kubenswrapper[4593]: I0129 11:34:33.946548 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:34:33 crc kubenswrapper[4593]: I0129 11:34:33.947209 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.032944 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.035570 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.063816 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.160930 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b68z\" (UniqueName: \"kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.161171 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.161800 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.263603 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.264001 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b68z\" (UniqueName: \"kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.264074 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.264262 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.264530 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.302187 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4b68z\" (UniqueName: \"kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z\") pod \"community-operators-5nlmk\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.374913 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:34:57 crc kubenswrapper[4593]: I0129 11:34:57.980272 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:34:57 crc kubenswrapper[4593]: W0129 11:34:57.993643 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c4381e_f9c8_4453_8680_3ee5fab8d1f2.slice/crio-f9a56a74fc7daa58d106bd12a56a8706dbae0e26b7157708545017068760372e WatchSource:0}: Error finding container f9a56a74fc7daa58d106bd12a56a8706dbae0e26b7157708545017068760372e: Status 404 returned error can't find the container with id f9a56a74fc7daa58d106bd12a56a8706dbae0e26b7157708545017068760372e Jan 29 11:34:59 crc kubenswrapper[4593]: I0129 11:34:59.010975 4593 generic.go:334] "Generic (PLEG): container finished" podID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerID="7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff" exitCode=0 Jan 29 11:34:59 crc kubenswrapper[4593]: I0129 11:34:59.011101 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerDied","Data":"7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff"} Jan 29 11:34:59 crc kubenswrapper[4593]: I0129 11:34:59.011361 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerStarted","Data":"f9a56a74fc7daa58d106bd12a56a8706dbae0e26b7157708545017068760372e"} Jan 29 11:34:59 crc kubenswrapper[4593]: I0129 11:34:59.013130 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:35:01 crc kubenswrapper[4593]: I0129 11:35:01.028851 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerStarted","Data":"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711"} Jan 29 11:35:03 crc kubenswrapper[4593]: I0129 11:35:03.946055 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:35:03 crc kubenswrapper[4593]: I0129 11:35:03.946652 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:35:04 crc kubenswrapper[4593]: I0129 11:35:04.063507 4593 generic.go:334] "Generic (PLEG): container finished" podID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" 
containerID="74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711" exitCode=0 Jan 29 11:35:04 crc kubenswrapper[4593]: I0129 11:35:04.063568 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerDied","Data":"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711"} Jan 29 11:35:05 crc kubenswrapper[4593]: I0129 11:35:05.086392 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerStarted","Data":"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e"} Jan 29 11:35:05 crc kubenswrapper[4593]: I0129 11:35:05.106358 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5nlmk" podStartSLOduration=2.400794017 podStartE2EDuration="8.106338155s" podCreationTimestamp="2026-01-29 11:34:57 +0000 UTC" firstStartedPulling="2026-01-29 11:34:59.012861445 +0000 UTC m=+2164.885895636" lastFinishedPulling="2026-01-29 11:35:04.718405563 +0000 UTC m=+2170.591439774" observedRunningTime="2026-01-29 11:35:05.106317864 +0000 UTC m=+2170.979352055" watchObservedRunningTime="2026-01-29 11:35:05.106338155 +0000 UTC m=+2170.979372346" Jan 29 11:35:07 crc kubenswrapper[4593]: I0129 11:35:07.376410 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:07 crc kubenswrapper[4593]: I0129 11:35:07.376830 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:08 crc kubenswrapper[4593]: I0129 11:35:08.424067 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5nlmk" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="registry-server" probeResult="failure" output=< Jan 29 11:35:08 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:35:08 crc kubenswrapper[4593]: > Jan 29 11:35:16 crc kubenswrapper[4593]: I0129 11:35:16.177512 4593 generic.go:334] "Generic (PLEG): container finished" podID="80db2d7c-94e6-418b-a0b4-2b4064356e4b" containerID="3d1b42f49400161b1d8c95796bd799e62ffe6e307b7fcee26199ead4efaeeb5f" exitCode=0 Jan 29 11:35:16 crc kubenswrapper[4593]: I0129 11:35:16.178291 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" event={"ID":"80db2d7c-94e6-418b-a0b4-2b4064356e4b","Type":"ContainerDied","Data":"3d1b42f49400161b1d8c95796bd799e62ffe6e307b7fcee26199ead4efaeeb5f"} Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.536965 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.668771 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.761297 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.923548 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle\") pod \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.923886 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam\") pod \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.923908 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory\") pod \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.924078 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nkgg\" (UniqueName: \"kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg\") pod \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.924150 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0\") pod \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\" (UID: \"80db2d7c-94e6-418b-a0b4-2b4064356e4b\") " Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.952825 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "80db2d7c-94e6-418b-a0b4-2b4064356e4b" (UID: "80db2d7c-94e6-418b-a0b4-2b4064356e4b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.960857 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg" (OuterVolumeSpecName: "kube-api-access-7nkgg") pod "80db2d7c-94e6-418b-a0b4-2b4064356e4b" (UID: "80db2d7c-94e6-418b-a0b4-2b4064356e4b"). InnerVolumeSpecName "kube-api-access-7nkgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.971338 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "80db2d7c-94e6-418b-a0b4-2b4064356e4b" (UID: "80db2d7c-94e6-418b-a0b4-2b4064356e4b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.971555 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory" (OuterVolumeSpecName: "inventory") pod "80db2d7c-94e6-418b-a0b4-2b4064356e4b" (UID: "80db2d7c-94e6-418b-a0b4-2b4064356e4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:35:17 crc kubenswrapper[4593]: I0129 11:35:17.972417 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "80db2d7c-94e6-418b-a0b4-2b4064356e4b" (UID: "80db2d7c-94e6-418b-a0b4-2b4064356e4b"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.026445 4593 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.026494 4593 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.026506 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.026521 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80db2d7c-94e6-418b-a0b4-2b4064356e4b-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.026532 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nkgg\" (UniqueName: \"kubernetes.io/projected/80db2d7c-94e6-418b-a0b4-2b4064356e4b-kube-api-access-7nkgg\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.202351 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" event={"ID":"80db2d7c-94e6-418b-a0b4-2b4064356e4b","Type":"ContainerDied","Data":"7b36f3307cde3252ef687db46ed25297713e29f6036f5d4211d41f1c07171c14"} Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.202398 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b36f3307cde3252ef687db46ed25297713e29f6036f5d4211d41f1c07171c14" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.202719 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ftxjl" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.411975 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct"] Jan 29 11:35:18 crc kubenswrapper[4593]: E0129 11:35:18.412455 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80db2d7c-94e6-418b-a0b4-2b4064356e4b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.412487 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="80db2d7c-94e6-418b-a0b4-2b4064356e4b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.412805 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="80db2d7c-94e6-418b-a0b4-2b4064356e4b" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.413610 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.415331 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.415393 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.415905 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.416048 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.416829 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.417064 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.445166 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct"] Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534240 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxpkb\" (UniqueName: \"kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534510 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534592 4593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534659 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534690 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.534764 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636224 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636283 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636311 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636333 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636368 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.636456 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxpkb\" (UniqueName: \"kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.641254 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.641410 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.641616 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.642537 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.643144 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.661171 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxpkb\" (UniqueName: \"kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:18 crc kubenswrapper[4593]: I0129 11:35:18.733236 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:35:19 crc kubenswrapper[4593]: I0129 11:35:19.296250 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct"] Jan 29 11:35:20 crc kubenswrapper[4593]: I0129 11:35:20.220611 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" event={"ID":"4c7cff3f-040a-4499-825c-3cccd015326a","Type":"ContainerStarted","Data":"3d1228225b6ffd897296a865f985eb25440e60005ab6ac0ae135485a6d691258"} Jan 29 11:35:20 crc kubenswrapper[4593]: I0129 11:35:20.221120 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" event={"ID":"4c7cff3f-040a-4499-825c-3cccd015326a","Type":"ContainerStarted","Data":"b6601232a02e3d92b3cca5f75209114738f2a4a3ccaef37ffa707cfb7625bc91"} Jan 29 11:35:20 crc kubenswrapper[4593]: I0129 11:35:20.246900 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" podStartSLOduration=1.7392019140000001 podStartE2EDuration="2.246874561s" podCreationTimestamp="2026-01-29 11:35:18 +0000 UTC" firstStartedPulling="2026-01-29 11:35:19.302714047 +0000 UTC m=+2185.175748238" lastFinishedPulling="2026-01-29 11:35:19.810386694 +0000 UTC m=+2185.683420885" observedRunningTime="2026-01-29 11:35:20.237948199 +0000 UTC m=+2186.110982410" watchObservedRunningTime="2026-01-29 11:35:20.246874561 +0000 UTC m=+2186.119908762" Jan 29 11:35:20 crc kubenswrapper[4593]: I0129 11:35:20.529298 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:35:20 crc kubenswrapper[4593]: I0129 11:35:20.529999 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5nlmk" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="registry-server" containerID="cri-o://a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e" gracePeriod=2 Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.002462 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.094603 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b68z\" (UniqueName: \"kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z\") pod \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.094720 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content\") pod \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.094857 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities\") pod \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\" (UID: \"67c4381e-f9c8-4453-8680-3ee5fab8d1f2\") " Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.095796 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities" (OuterVolumeSpecName: "utilities") pod "67c4381e-f9c8-4453-8680-3ee5fab8d1f2" (UID: "67c4381e-f9c8-4453-8680-3ee5fab8d1f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.117323 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z" (OuterVolumeSpecName: "kube-api-access-4b68z") pod "67c4381e-f9c8-4453-8680-3ee5fab8d1f2" (UID: "67c4381e-f9c8-4453-8680-3ee5fab8d1f2"). InnerVolumeSpecName "kube-api-access-4b68z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.152389 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67c4381e-f9c8-4453-8680-3ee5fab8d1f2" (UID: "67c4381e-f9c8-4453-8680-3ee5fab8d1f2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.197618 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b68z\" (UniqueName: \"kubernetes.io/projected/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-kube-api-access-4b68z\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.197664 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.197678 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c4381e-f9c8-4453-8680-3ee5fab8d1f2-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.230276 4593 generic.go:334] "Generic (PLEG): container finished" podID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerID="a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e" exitCode=0 Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.231138 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nlmk" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.233773 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerDied","Data":"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e"} Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.233817 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nlmk" event={"ID":"67c4381e-f9c8-4453-8680-3ee5fab8d1f2","Type":"ContainerDied","Data":"f9a56a74fc7daa58d106bd12a56a8706dbae0e26b7157708545017068760372e"} Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.233842 4593 scope.go:117] "RemoveContainer" containerID="a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.272231 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.272966 4593 scope.go:117] "RemoveContainer" containerID="74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.280651 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5nlmk"] Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.294054 4593 scope.go:117] "RemoveContainer" containerID="7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.352722 4593 scope.go:117] "RemoveContainer" containerID="a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e" Jan 29 11:35:21 crc kubenswrapper[4593]: E0129 11:35:21.353256 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e\": container with ID starting with a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e not found: ID does not exist" containerID="a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.353300 
4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e"} err="failed to get container status \"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e\": rpc error: code = NotFound desc = could not find container \"a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e\": container with ID starting with a016b50eba882b51d1fbb5638a8d6078af89cdbdc31aa8d9358399358bfcfa8e not found: ID does not exist" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.353329 4593 scope.go:117] "RemoveContainer" containerID="74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711" Jan 29 11:35:21 crc kubenswrapper[4593]: E0129 11:35:21.353788 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711\": container with ID starting with 74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711 not found: ID does not exist" containerID="74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.353817 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711"} err="failed to get container status \"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711\": rpc error: code = NotFound desc = could not find container \"74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711\": container with ID starting with 74d480ab83f22e68cd7c435ee8e14833d75c66ec9e6a546ce97fe66211ff3711 not found: ID does not exist" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.353842 4593 scope.go:117] "RemoveContainer" containerID="7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff" Jan 29 11:35:21 crc kubenswrapper[4593]: E0129 11:35:21.354138 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff\": container with ID starting with 7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff not found: ID does not exist" containerID="7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff" Jan 29 11:35:21 crc kubenswrapper[4593]: I0129 11:35:21.354161 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff"} err="failed to get container status \"7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff\": rpc error: code = NotFound desc = could not find container \"7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff\": container with ID starting with 7daf072a473270a9342ce76b469637775fe6f66141ab7e5229ae058d21e5a6ff not found: ID does not exist" Jan 29 11:35:23 crc kubenswrapper[4593]: I0129 11:35:23.089117 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" path="/var/lib/kubelet/pods/67c4381e-f9c8-4453-8680-3ee5fab8d1f2/volumes" Jan 29 11:35:33 crc kubenswrapper[4593]: I0129 11:35:33.946598 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:35:33 crc kubenswrapper[4593]: I0129 11:35:33.947162 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:35:33 crc kubenswrapper[4593]: I0129 11:35:33.947241 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:35:33 crc kubenswrapper[4593]: I0129 11:35:33.948163 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:35:33 crc kubenswrapper[4593]: I0129 11:35:33.948321 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972" gracePeriod=600 Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.357695 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972" exitCode=0 Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.357740 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972"} Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.357784 4593 scope.go:117] "RemoveContainer" containerID="d9bec8beb5dfaa0d20c9211161eb9eee5ea3a5ff506e092f37543c6a238e36ec" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.365064 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:34 crc kubenswrapper[4593]: E0129 11:35:34.365479 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="registry-server" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.365501 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="registry-server" Jan 29 11:35:34 crc kubenswrapper[4593]: E0129 11:35:34.365511 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="extract-utilities" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.365517 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="extract-utilities" Jan 29 11:35:34 crc kubenswrapper[4593]: E0129 11:35:34.365541 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="extract-content" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.365547 4593 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="extract-content" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.365732 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c4381e-f9c8-4453-8680-3ee5fab8d1f2" containerName="registry-server" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.367018 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.379233 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.475549 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7vdv\" (UniqueName: \"kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.475704 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.475760 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.577106 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.577380 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.577546 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7vdv\" (UniqueName: \"kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.577762 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.577864 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.605294 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7vdv\" (UniqueName: \"kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv\") pod \"redhat-marketplace-7ftts\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:34 crc kubenswrapper[4593]: I0129 11:35:34.691088 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:35 crc kubenswrapper[4593]: I0129 11:35:35.178299 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:35 crc kubenswrapper[4593]: I0129 11:35:35.367771 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerStarted","Data":"4b06ea4e929072566d99822da48350f4d7a6964940570100bca4e50927cfff13"} Jan 29 11:35:37 crc kubenswrapper[4593]: I0129 11:35:37.388548 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed"} Jan 29 11:35:37 crc kubenswrapper[4593]: I0129 11:35:37.390887 4593 generic.go:334] "Generic (PLEG): container finished" podID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerID="aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff" exitCode=0 Jan 29 11:35:37 crc kubenswrapper[4593]: I0129 11:35:37.390927 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerDied","Data":"aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff"} Jan 29 11:35:38 crc kubenswrapper[4593]: I0129 11:35:38.404428 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerStarted","Data":"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c"} Jan 29 11:35:40 crc kubenswrapper[4593]: I0129 11:35:40.446932 4593 generic.go:334] "Generic (PLEG): container finished" podID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerID="99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c" exitCode=0 Jan 29 11:35:40 crc kubenswrapper[4593]: I0129 11:35:40.447019 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerDied","Data":"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c"} Jan 29 11:35:41 crc kubenswrapper[4593]: I0129 11:35:41.459458 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerStarted","Data":"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744"} Jan 29 11:35:41 crc kubenswrapper[4593]: I0129 
11:35:41.487805 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7ftts" podStartSLOduration=3.85781173 podStartE2EDuration="7.487779324s" podCreationTimestamp="2026-01-29 11:35:34 +0000 UTC" firstStartedPulling="2026-01-29 11:35:37.392200392 +0000 UTC m=+2203.265234583" lastFinishedPulling="2026-01-29 11:35:41.022167986 +0000 UTC m=+2206.895202177" observedRunningTime="2026-01-29 11:35:41.479036317 +0000 UTC m=+2207.352070518" watchObservedRunningTime="2026-01-29 11:35:41.487779324 +0000 UTC m=+2207.360813515" Jan 29 11:35:44 crc kubenswrapper[4593]: I0129 11:35:44.691590 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:44 crc kubenswrapper[4593]: I0129 11:35:44.693079 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:44 crc kubenswrapper[4593]: I0129 11:35:44.749293 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:46 crc kubenswrapper[4593]: I0129 11:35:46.558151 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:46 crc kubenswrapper[4593]: I0129 11:35:46.622912 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:48 crc kubenswrapper[4593]: I0129 11:35:48.518422 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7ftts" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="registry-server" containerID="cri-o://1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744" gracePeriod=2 Jan 29 11:35:48 crc kubenswrapper[4593]: I0129 11:35:48.975130 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.120281 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content\") pod \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.122123 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities\") pod \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.123848 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities" (OuterVolumeSpecName: "utilities") pod "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" (UID: "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.124836 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7vdv\" (UniqueName: \"kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv\") pod \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\" (UID: \"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4\") " Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.128641 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.137296 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv" (OuterVolumeSpecName: "kube-api-access-w7vdv") pod "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" (UID: "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4"). InnerVolumeSpecName "kube-api-access-w7vdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.150989 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" (UID: "1ac08e15-d0dc-4f0e-8704-c1ab168d73c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.232101 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7vdv\" (UniqueName: \"kubernetes.io/projected/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-kube-api-access-w7vdv\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.232139 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.531585 4593 generic.go:334] "Generic (PLEG): container finished" podID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerID="1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744" exitCode=0 Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.531620 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerDied","Data":"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744"} Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.531660 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7ftts" event={"ID":"1ac08e15-d0dc-4f0e-8704-c1ab168d73c4","Type":"ContainerDied","Data":"4b06ea4e929072566d99822da48350f4d7a6964940570100bca4e50927cfff13"} Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.531679 4593 scope.go:117] "RemoveContainer" containerID="1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.531735 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7ftts" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.580303 4593 scope.go:117] "RemoveContainer" containerID="99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.588590 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.607436 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7ftts"] Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.613485 4593 scope.go:117] "RemoveContainer" containerID="aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.655686 4593 scope.go:117] "RemoveContainer" containerID="1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744" Jan 29 11:35:49 crc kubenswrapper[4593]: E0129 11:35:49.656454 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744\": container with ID starting with 1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744 not found: ID does not exist" containerID="1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.656594 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744"} err="failed to get container status \"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744\": rpc error: code = NotFound desc = could not find container \"1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744\": container with ID starting with 1fdbda5d794976022e78313b0e20be9d6d59becb29f897ce47612e2cb3f08744 not found: ID does not exist" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.656743 4593 scope.go:117] "RemoveContainer" containerID="99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c" Jan 29 11:35:49 crc kubenswrapper[4593]: E0129 11:35:49.657368 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c\": container with ID starting with 99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c not found: ID does not exist" containerID="99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.657512 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c"} err="failed to get container status \"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c\": rpc error: code = NotFound desc = could not find container \"99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c\": container with ID starting with 99be25b109404cd94d778893b108553c39d2f07fdff700b7b78ba209ae8de92c not found: ID does not exist" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.657622 4593 scope.go:117] "RemoveContainer" containerID="aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff" Jan 29 11:35:49 crc kubenswrapper[4593]: E0129 11:35:49.658090 4593 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff\": container with ID starting with aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff not found: ID does not exist" containerID="aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff" Jan 29 11:35:49 crc kubenswrapper[4593]: I0129 11:35:49.658136 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff"} err="failed to get container status \"aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff\": rpc error: code = NotFound desc = could not find container \"aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff\": container with ID starting with aca2b65b9701618df8d690cfd050401d6943822d0cdac68f29815b3060b3d6ff not found: ID does not exist" Jan 29 11:35:51 crc kubenswrapper[4593]: I0129 11:35:51.085877 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" path="/var/lib/kubelet/pods/1ac08e15-d0dc-4f0e-8704-c1ab168d73c4/volumes" Jan 29 11:36:11 crc kubenswrapper[4593]: I0129 11:36:11.759234 4593 generic.go:334] "Generic (PLEG): container finished" podID="4c7cff3f-040a-4499-825c-3cccd015326a" containerID="3d1228225b6ffd897296a865f985eb25440e60005ab6ac0ae135485a6d691258" exitCode=0 Jan 29 11:36:11 crc kubenswrapper[4593]: I0129 11:36:11.759369 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" event={"ID":"4c7cff3f-040a-4499-825c-3cccd015326a","Type":"ContainerDied","Data":"3d1228225b6ffd897296a865f985eb25440e60005ab6ac0ae135485a6d691258"} Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.227670 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.338400 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.339625 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.339918 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.340036 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.340218 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxpkb\" (UniqueName: \"kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.340947 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory\") pod \"4c7cff3f-040a-4499-825c-3cccd015326a\" (UID: \"4c7cff3f-040a-4499-825c-3cccd015326a\") " Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.344338 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb" (OuterVolumeSpecName: "kube-api-access-gxpkb") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "kube-api-access-gxpkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.348815 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.369920 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.371173 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory" (OuterVolumeSpecName: "inventory") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.375222 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.378682 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4c7cff3f-040a-4499-825c-3cccd015326a" (UID: "4c7cff3f-040a-4499-825c-3cccd015326a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443778 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxpkb\" (UniqueName: \"kubernetes.io/projected/4c7cff3f-040a-4499-825c-3cccd015326a-kube-api-access-gxpkb\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443809 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443822 4593 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443834 4593 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443843 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.443851 4593 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4c7cff3f-040a-4499-825c-3cccd015326a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.782752 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" event={"ID":"4c7cff3f-040a-4499-825c-3cccd015326a","Type":"ContainerDied","Data":"b6601232a02e3d92b3cca5f75209114738f2a4a3ccaef37ffa707cfb7625bc91"} Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.782793 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6601232a02e3d92b3cca5f75209114738f2a4a3ccaef37ffa707cfb7625bc91" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.782849 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.915910 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j"] Jan 29 11:36:13 crc kubenswrapper[4593]: E0129 11:36:13.916389 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7cff3f-040a-4499-825c-3cccd015326a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916413 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7cff3f-040a-4499-825c-3cccd015326a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 11:36:13 crc kubenswrapper[4593]: E0129 11:36:13.916427 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="extract-utilities" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916435 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="extract-utilities" Jan 29 11:36:13 crc kubenswrapper[4593]: E0129 11:36:13.916457 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="extract-content" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916466 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="extract-content" Jan 29 11:36:13 crc kubenswrapper[4593]: E0129 11:36:13.916493 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="registry-server" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916504 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="registry-server" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916723 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c7cff3f-040a-4499-825c-3cccd015326a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.916756 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ac08e15-d0dc-4f0e-8704-c1ab168d73c4" containerName="registry-server" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.917588 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.921344 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.921726 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.922017 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.922411 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.922698 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.937214 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j"] Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.952713 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.952947 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.953174 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.953259 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9vrm\" (UniqueName: \"kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:13 crc kubenswrapper[4593]: I0129 11:36:13.953368 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.055203 4593 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.055556 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9vrm\" (UniqueName: \"kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.055675 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.055777 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.055855 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.060722 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.061060 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.063045 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.064487 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.075449 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9vrm\" (UniqueName: \"kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jt98j\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.242507 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.784876 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j"] Jan 29 11:36:14 crc kubenswrapper[4593]: I0129 11:36:14.802787 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" event={"ID":"1f7fe168-4498-4002-9233-d6c2d9f115fb","Type":"ContainerStarted","Data":"630e5bb315500c97ee35063cb0b1025dae526568ec5b2fc147514f582e1d824e"} Jan 29 11:36:15 crc kubenswrapper[4593]: I0129 11:36:15.855591 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" event={"ID":"1f7fe168-4498-4002-9233-d6c2d9f115fb","Type":"ContainerStarted","Data":"d0dad791e1b4a4ce15ef06b2c8538abd555b7ecb9305ee001925866de13618a6"} Jan 29 11:36:15 crc kubenswrapper[4593]: I0129 11:36:15.888952 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" podStartSLOduration=2.407420906 podStartE2EDuration="2.888922484s" podCreationTimestamp="2026-01-29 11:36:13 +0000 UTC" firstStartedPulling="2026-01-29 11:36:14.788251527 +0000 UTC m=+2240.661285718" lastFinishedPulling="2026-01-29 11:36:15.269753105 +0000 UTC m=+2241.142787296" observedRunningTime="2026-01-29 11:36:15.879264792 +0000 UTC m=+2241.752298983" watchObservedRunningTime="2026-01-29 11:36:15.888922484 +0000 UTC m=+2241.761956675" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.117431 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.120118 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.126670 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.208191 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.208275 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.208389 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfmfx\" (UniqueName: \"kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.310180 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.310282 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.310323 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfmfx\" (UniqueName: \"kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.310891 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.310926 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.340552 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xfmfx\" (UniqueName: \"kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx\") pod \"certified-operators-ppw2m\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:53 crc kubenswrapper[4593]: I0129 11:37:53.459880 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:37:54 crc kubenswrapper[4593]: I0129 11:37:54.048039 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:37:54 crc kubenswrapper[4593]: I0129 11:37:54.952019 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerDied","Data":"83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db"} Jan 29 11:37:54 crc kubenswrapper[4593]: I0129 11:37:54.951831 4593 generic.go:334] "Generic (PLEG): container finished" podID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerID="83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db" exitCode=0 Jan 29 11:37:54 crc kubenswrapper[4593]: I0129 11:37:54.952573 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerStarted","Data":"91ae26aa44dcad5158d3b712c06f9da2552490c44bd51cd521f017e0fab71b0b"} Jan 29 11:37:56 crc kubenswrapper[4593]: I0129 11:37:56.976308 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerStarted","Data":"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d"} Jan 29 11:38:00 crc kubenswrapper[4593]: I0129 11:38:00.011697 4593 generic.go:334] "Generic (PLEG): container finished" podID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerID="785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d" exitCode=0 Jan 29 11:38:00 crc kubenswrapper[4593]: I0129 11:38:00.011783 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerDied","Data":"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d"} Jan 29 11:38:01 crc kubenswrapper[4593]: I0129 11:38:01.025669 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerStarted","Data":"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a"} Jan 29 11:38:01 crc kubenswrapper[4593]: I0129 11:38:01.057699 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ppw2m" podStartSLOduration=2.569719041 podStartE2EDuration="8.057666229s" podCreationTimestamp="2026-01-29 11:37:53 +0000 UTC" firstStartedPulling="2026-01-29 11:37:54.956648186 +0000 UTC m=+2340.829682377" lastFinishedPulling="2026-01-29 11:38:00.444595364 +0000 UTC m=+2346.317629565" observedRunningTime="2026-01-29 11:38:01.047276945 +0000 UTC m=+2346.920311146" watchObservedRunningTime="2026-01-29 11:38:01.057666229 +0000 UTC m=+2346.930700420" Jan 29 11:38:03 crc kubenswrapper[4593]: I0129 11:38:03.460032 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:03 crc kubenswrapper[4593]: I0129 11:38:03.460533 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:03 crc kubenswrapper[4593]: I0129 11:38:03.946511 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:38:03 crc kubenswrapper[4593]: I0129 11:38:03.946594 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:38:04 crc kubenswrapper[4593]: I0129 11:38:04.503494 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ppw2m" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="registry-server" probeResult="failure" output=< Jan 29 11:38:04 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:38:04 crc kubenswrapper[4593]: > Jan 29 11:38:13 crc kubenswrapper[4593]: I0129 11:38:13.504210 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:13 crc kubenswrapper[4593]: I0129 11:38:13.551481 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:13 crc kubenswrapper[4593]: I0129 11:38:13.744626 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.184871 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ppw2m" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="registry-server" containerID="cri-o://acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a" gracePeriod=2 Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.660373 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.779059 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content\") pod \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.779510 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities\") pod \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.779574 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfmfx\" (UniqueName: \"kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx\") pod \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\" (UID: \"15a2cd22-170c-4450-accf-d5d0a7f5a7f7\") " Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.781442 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities" (OuterVolumeSpecName: "utilities") pod "15a2cd22-170c-4450-accf-d5d0a7f5a7f7" (UID: "15a2cd22-170c-4450-accf-d5d0a7f5a7f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.795968 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx" (OuterVolumeSpecName: "kube-api-access-xfmfx") pod "15a2cd22-170c-4450-accf-d5d0a7f5a7f7" (UID: "15a2cd22-170c-4450-accf-d5d0a7f5a7f7"). InnerVolumeSpecName "kube-api-access-xfmfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.839348 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15a2cd22-170c-4450-accf-d5d0a7f5a7f7" (UID: "15a2cd22-170c-4450-accf-d5d0a7f5a7f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.881884 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.881917 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfmfx\" (UniqueName: \"kubernetes.io/projected/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-kube-api-access-xfmfx\") on node \"crc\" DevicePath \"\"" Jan 29 11:38:15 crc kubenswrapper[4593]: I0129 11:38:15.881930 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a2cd22-170c-4450-accf-d5d0a7f5a7f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.197001 4593 generic.go:334] "Generic (PLEG): container finished" podID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerID="acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a" exitCode=0 Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.197066 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerDied","Data":"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a"} Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.197092 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ppw2m" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.197124 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ppw2m" event={"ID":"15a2cd22-170c-4450-accf-d5d0a7f5a7f7","Type":"ContainerDied","Data":"91ae26aa44dcad5158d3b712c06f9da2552490c44bd51cd521f017e0fab71b0b"} Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.197176 4593 scope.go:117] "RemoveContainer" containerID="acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.243301 4593 scope.go:117] "RemoveContainer" containerID="785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.251073 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.265071 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ppw2m"] Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.266089 4593 scope.go:117] "RemoveContainer" containerID="83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.318667 4593 scope.go:117] "RemoveContainer" containerID="acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a" Jan 29 11:38:16 crc kubenswrapper[4593]: E0129 11:38:16.319069 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a\": container with ID starting with acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a not found: ID does not exist" containerID="acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.319108 
4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a"} err="failed to get container status \"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a\": rpc error: code = NotFound desc = could not find container \"acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a\": container with ID starting with acefa20b6c2a7eb8f2abb5f726a5732dcd40e969e6dc8c07f6e7016031fcae6a not found: ID does not exist" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.319178 4593 scope.go:117] "RemoveContainer" containerID="785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d" Jan 29 11:38:16 crc kubenswrapper[4593]: E0129 11:38:16.319389 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d\": container with ID starting with 785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d not found: ID does not exist" containerID="785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.319409 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d"} err="failed to get container status \"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d\": rpc error: code = NotFound desc = could not find container \"785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d\": container with ID starting with 785574bc568a7ca45a6d2e331a981b754cdc997f147177f3ed5886d0e204820d not found: ID does not exist" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.319423 4593 scope.go:117] "RemoveContainer" containerID="83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db" Jan 29 11:38:16 crc kubenswrapper[4593]: E0129 11:38:16.319806 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db\": container with ID starting with 83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db not found: ID does not exist" containerID="83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db" Jan 29 11:38:16 crc kubenswrapper[4593]: I0129 11:38:16.319828 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db"} err="failed to get container status \"83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db\": rpc error: code = NotFound desc = could not find container \"83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db\": container with ID starting with 83534a665108eec509e82cee041b5066f18047792c8dd75b6085bf9d67c580db not found: ID does not exist" Jan 29 11:38:17 crc kubenswrapper[4593]: I0129 11:38:17.092479 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" path="/var/lib/kubelet/pods/15a2cd22-170c-4450-accf-d5d0a7f5a7f7/volumes" Jan 29 11:38:33 crc kubenswrapper[4593]: I0129 11:38:33.945967 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:38:33 crc kubenswrapper[4593]: I0129 11:38:33.946599 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:39:03 crc kubenswrapper[4593]: I0129 11:39:03.946779 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:39:03 crc kubenswrapper[4593]: I0129 11:39:03.947561 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:39:03 crc kubenswrapper[4593]: I0129 11:39:03.947615 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:39:03 crc kubenswrapper[4593]: I0129 11:39:03.948651 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:39:03 crc kubenswrapper[4593]: I0129 11:39:03.948819 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" gracePeriod=600 Jan 29 11:39:04 crc kubenswrapper[4593]: E0129 11:39:04.074456 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:39:04 crc kubenswrapper[4593]: I0129 11:39:04.647962 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" exitCode=0 Jan 29 11:39:04 crc kubenswrapper[4593]: I0129 11:39:04.648015 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed"} Jan 29 11:39:04 crc kubenswrapper[4593]: I0129 11:39:04.648130 4593 scope.go:117] "RemoveContainer" containerID="466a93f4cbc41eff7fb78889db6079a8dd1f4541d541aedd9f60554c729b2972" Jan 29 11:39:04 crc kubenswrapper[4593]: I0129 
11:39:04.649022 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:39:04 crc kubenswrapper[4593]: E0129 11:39:04.649756 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:39:18 crc kubenswrapper[4593]: I0129 11:39:18.075811 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:39:18 crc kubenswrapper[4593]: E0129 11:39:18.077039 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:39:30 crc kubenswrapper[4593]: I0129 11:39:30.076954 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:39:30 crc kubenswrapper[4593]: E0129 11:39:30.077745 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:39:45 crc kubenswrapper[4593]: I0129 11:39:45.081811 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:39:45 crc kubenswrapper[4593]: E0129 11:39:45.082690 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:39:59 crc kubenswrapper[4593]: I0129 11:39:59.075417 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:39:59 crc kubenswrapper[4593]: E0129 11:39:59.076383 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:40:13 crc kubenswrapper[4593]: I0129 11:40:13.075662 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:40:13 crc kubenswrapper[4593]: E0129 11:40:13.076458 
4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:40:26 crc kubenswrapper[4593]: I0129 11:40:26.077016 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:40:26 crc kubenswrapper[4593]: E0129 11:40:26.079694 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:40:37 crc kubenswrapper[4593]: I0129 11:40:37.074903 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:40:37 crc kubenswrapper[4593]: E0129 11:40:37.075549 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:40:48 crc kubenswrapper[4593]: I0129 11:40:48.075334 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:40:48 crc kubenswrapper[4593]: E0129 11:40:48.077334 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:40:59 crc kubenswrapper[4593]: I0129 11:40:59.077417 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:40:59 crc kubenswrapper[4593]: E0129 11:40:59.078312 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:41:07 crc kubenswrapper[4593]: I0129 11:41:07.862789 4593 generic.go:334] "Generic (PLEG): container finished" podID="1f7fe168-4498-4002-9233-d6c2d9f115fb" containerID="d0dad791e1b4a4ce15ef06b2c8538abd555b7ecb9305ee001925866de13618a6" exitCode=0 Jan 29 11:41:07 crc kubenswrapper[4593]: I0129 11:41:07.862910 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" event={"ID":"1f7fe168-4498-4002-9233-d6c2d9f115fb","Type":"ContainerDied","Data":"d0dad791e1b4a4ce15ef06b2c8538abd555b7ecb9305ee001925866de13618a6"} Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.296567 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.439397 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9vrm\" (UniqueName: \"kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm\") pod \"1f7fe168-4498-4002-9233-d6c2d9f115fb\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.439476 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam\") pod \"1f7fe168-4498-4002-9233-d6c2d9f115fb\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.439526 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0\") pod \"1f7fe168-4498-4002-9233-d6c2d9f115fb\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.439598 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory\") pod \"1f7fe168-4498-4002-9233-d6c2d9f115fb\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.439659 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle\") pod \"1f7fe168-4498-4002-9233-d6c2d9f115fb\" (UID: \"1f7fe168-4498-4002-9233-d6c2d9f115fb\") " Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.450456 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1f7fe168-4498-4002-9233-d6c2d9f115fb" (UID: "1f7fe168-4498-4002-9233-d6c2d9f115fb"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.451553 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm" (OuterVolumeSpecName: "kube-api-access-w9vrm") pod "1f7fe168-4498-4002-9233-d6c2d9f115fb" (UID: "1f7fe168-4498-4002-9233-d6c2d9f115fb"). InnerVolumeSpecName "kube-api-access-w9vrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.487058 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory" (OuterVolumeSpecName: "inventory") pod "1f7fe168-4498-4002-9233-d6c2d9f115fb" (UID: "1f7fe168-4498-4002-9233-d6c2d9f115fb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.488978 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1f7fe168-4498-4002-9233-d6c2d9f115fb" (UID: "1f7fe168-4498-4002-9233-d6c2d9f115fb"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.500227 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1f7fe168-4498-4002-9233-d6c2d9f115fb" (UID: "1f7fe168-4498-4002-9233-d6c2d9f115fb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.541493 4593 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.541546 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.541561 4593 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.541579 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9vrm\" (UniqueName: \"kubernetes.io/projected/1f7fe168-4498-4002-9233-d6c2d9f115fb-kube-api-access-w9vrm\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.541594 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f7fe168-4498-4002-9233-d6c2d9f115fb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.930532 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" event={"ID":"1f7fe168-4498-4002-9233-d6c2d9f115fb","Type":"ContainerDied","Data":"630e5bb315500c97ee35063cb0b1025dae526568ec5b2fc147514f582e1d824e"} Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.930582 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630e5bb315500c97ee35063cb0b1025dae526568ec5b2fc147514f582e1d824e" Jan 29 11:41:09 crc kubenswrapper[4593]: I0129 11:41:09.930617 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jt98j" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014026 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg"] Jan 29 11:41:10 crc kubenswrapper[4593]: E0129 11:41:10.014440 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="extract-content" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014460 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="extract-content" Jan 29 11:41:10 crc kubenswrapper[4593]: E0129 11:41:10.014468 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7fe168-4498-4002-9233-d6c2d9f115fb" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014475 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7fe168-4498-4002-9233-d6c2d9f115fb" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 11:41:10 crc kubenswrapper[4593]: E0129 11:41:10.014488 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="registry-server" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014495 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="registry-server" Jan 29 11:41:10 crc kubenswrapper[4593]: E0129 11:41:10.014512 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="extract-utilities" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014521 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="extract-utilities" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014753 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a2cd22-170c-4450-accf-d5d0a7f5a7f7" containerName="registry-server" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.014772 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7fe168-4498-4002-9233-d6c2d9f115fb" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.015483 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.017604 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.021522 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.021794 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.022338 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.022513 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.022360 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.022982 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.039718 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg"] Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088677 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088721 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjx4m\" (UniqueName: \"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088792 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088845 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088864 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088889 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.088927 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.089643 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.089831 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.191879 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192210 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192263 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192306 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192344 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192379 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192395 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjx4m\" (UniqueName: \"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192453 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.192530 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.194450 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.196675 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.197644 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.199312 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.200004 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.200272 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.202051 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.204121 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.215008 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjx4m\" (UniqueName: \"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rtfdg\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.332726 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.872942 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg"] Jan 29 11:41:10 crc kubenswrapper[4593]: W0129 11:41:10.878872 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf45f3aca_42e1_4105_b843_f5288550ce8c.slice/crio-3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2 WatchSource:0}: Error finding container 3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2: Status 404 returned error can't find the container with id 3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2 Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.882593 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:41:10 crc kubenswrapper[4593]: I0129 11:41:10.944215 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" event={"ID":"f45f3aca-42e1-4105-b843-f5288550ce8c","Type":"ContainerStarted","Data":"3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2"} Jan 29 11:41:11 crc kubenswrapper[4593]: I0129 11:41:11.954352 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" event={"ID":"f45f3aca-42e1-4105-b843-f5288550ce8c","Type":"ContainerStarted","Data":"b57226db838e93862713f292f9315141a4f22f891753ea3cbd93990d176edcc4"} Jan 29 11:41:11 crc kubenswrapper[4593]: I0129 11:41:11.976811 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" podStartSLOduration=2.373611192 podStartE2EDuration="2.976793336s" podCreationTimestamp="2026-01-29 11:41:09 +0000 UTC" firstStartedPulling="2026-01-29 11:41:10.882185286 +0000 UTC m=+2536.755219487" lastFinishedPulling="2026-01-29 11:41:11.4853674 +0000 UTC m=+2537.358401631" observedRunningTime="2026-01-29 11:41:11.969725963 +0000 UTC m=+2537.842760154" watchObservedRunningTime="2026-01-29 11:41:11.976793336 +0000 UTC m=+2537.849827527" Jan 29 11:41:14 crc kubenswrapper[4593]: I0129 11:41:14.075509 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:41:14 crc kubenswrapper[4593]: E0129 11:41:14.076317 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:41:26 crc kubenswrapper[4593]: I0129 11:41:26.075750 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:41:26 crc kubenswrapper[4593]: E0129 11:41:26.076703 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:41:39 crc kubenswrapper[4593]: I0129 11:41:39.075284 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:41:39 crc kubenswrapper[4593]: E0129 11:41:39.076013 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:41:51 crc kubenswrapper[4593]: I0129 11:41:51.075521 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:41:51 crc kubenswrapper[4593]: E0129 11:41:51.076231 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:03 crc kubenswrapper[4593]: I0129 11:42:03.075256 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:42:03 crc kubenswrapper[4593]: E0129 11:42:03.076042 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.713532 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.717575 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.737165 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.743957 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.744078 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.744127 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxkb\" (UniqueName: \"kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.845693 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.845979 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnxkb\" (UniqueName: \"kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.846093 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.846193 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.846400 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:05 crc kubenswrapper[4593]: I0129 11:42:05.867576 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mnxkb\" (UniqueName: \"kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb\") pod \"redhat-operators-drmwg\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:06 crc kubenswrapper[4593]: I0129 11:42:06.053491 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:06 crc kubenswrapper[4593]: I0129 11:42:06.593235 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:42:06 crc kubenswrapper[4593]: I0129 11:42:06.648483 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerStarted","Data":"41ece5201791f94fd3acfd06ed7b4e84ad465e9e4c76175fabaa5d1d99f6ff8c"} Jan 29 11:42:07 crc kubenswrapper[4593]: I0129 11:42:07.658456 4593 generic.go:334] "Generic (PLEG): container finished" podID="69e48707-1458-40da-aa50-9f79ccef1297" containerID="546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee" exitCode=0 Jan 29 11:42:07 crc kubenswrapper[4593]: I0129 11:42:07.658509 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerDied","Data":"546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee"} Jan 29 11:42:09 crc kubenswrapper[4593]: I0129 11:42:09.685774 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerStarted","Data":"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a"} Jan 29 11:42:14 crc kubenswrapper[4593]: I0129 11:42:14.747523 4593 generic.go:334] "Generic (PLEG): container finished" podID="69e48707-1458-40da-aa50-9f79ccef1297" containerID="4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a" exitCode=0 Jan 29 11:42:14 crc kubenswrapper[4593]: I0129 11:42:14.747669 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerDied","Data":"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a"} Jan 29 11:42:15 crc kubenswrapper[4593]: I0129 11:42:15.082882 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:42:15 crc kubenswrapper[4593]: E0129 11:42:15.083557 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:15 crc kubenswrapper[4593]: I0129 11:42:15.763149 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerStarted","Data":"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8"} Jan 29 11:42:15 crc kubenswrapper[4593]: I0129 11:42:15.797563 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-drmwg" podStartSLOduration=3.1882229029999998 podStartE2EDuration="10.79753768s" podCreationTimestamp="2026-01-29 11:42:05 +0000 UTC" firstStartedPulling="2026-01-29 11:42:07.660485001 +0000 UTC m=+2593.533519192" lastFinishedPulling="2026-01-29 11:42:15.269799778 +0000 UTC m=+2601.142833969" observedRunningTime="2026-01-29 11:42:15.792417551 +0000 UTC m=+2601.665451742" watchObservedRunningTime="2026-01-29 11:42:15.79753768 +0000 UTC m=+2601.670571871" Jan 29 11:42:16 crc kubenswrapper[4593]: I0129 11:42:16.054840 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:16 crc kubenswrapper[4593]: I0129 11:42:16.054914 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:42:17 crc kubenswrapper[4593]: I0129 11:42:17.099450 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" probeResult="failure" output=< Jan 29 11:42:17 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:42:17 crc kubenswrapper[4593]: > Jan 29 11:42:27 crc kubenswrapper[4593]: I0129 11:42:27.105873 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" probeResult="failure" output=< Jan 29 11:42:27 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:42:27 crc kubenswrapper[4593]: > Jan 29 11:42:29 crc kubenswrapper[4593]: I0129 11:42:29.077750 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:42:29 crc kubenswrapper[4593]: E0129 11:42:29.078309 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:37 crc kubenswrapper[4593]: I0129 11:42:37.101225 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" probeResult="failure" output=< Jan 29 11:42:37 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:42:37 crc kubenswrapper[4593]: > Jan 29 11:42:43 crc kubenswrapper[4593]: I0129 11:42:43.075525 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:42:43 crc kubenswrapper[4593]: E0129 11:42:43.076219 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:47 crc kubenswrapper[4593]: I0129 11:42:47.099502 4593 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" probeResult="failure" output=< Jan 29 11:42:47 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:42:47 crc kubenswrapper[4593]: > Jan 29 11:42:57 crc kubenswrapper[4593]: I0129 11:42:57.075143 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:42:57 crc kubenswrapper[4593]: E0129 11:42:57.075883 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:42:57 crc kubenswrapper[4593]: I0129 11:42:57.101955 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" probeResult="failure" output=< Jan 29 11:42:57 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:42:57 crc kubenswrapper[4593]: > Jan 29 11:43:06 crc kubenswrapper[4593]: I0129 11:43:06.112189 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:43:06 crc kubenswrapper[4593]: I0129 11:43:06.163301 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:43:06 crc kubenswrapper[4593]: I0129 11:43:06.936134 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:43:07 crc kubenswrapper[4593]: I0129 11:43:07.299180 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-drmwg" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server" containerID="cri-o://e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8" gracePeriod=2 Jan 29 11:43:07 crc kubenswrapper[4593]: I0129 11:43:07.820491 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.015327 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content\") pod \"69e48707-1458-40da-aa50-9f79ccef1297\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.015481 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities\") pod \"69e48707-1458-40da-aa50-9f79ccef1297\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.015782 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnxkb\" (UniqueName: \"kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb\") pod \"69e48707-1458-40da-aa50-9f79ccef1297\" (UID: \"69e48707-1458-40da-aa50-9f79ccef1297\") " Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.016319 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities" (OuterVolumeSpecName: "utilities") pod "69e48707-1458-40da-aa50-9f79ccef1297" (UID: "69e48707-1458-40da-aa50-9f79ccef1297"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.021563 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb" (OuterVolumeSpecName: "kube-api-access-mnxkb") pod "69e48707-1458-40da-aa50-9f79ccef1297" (UID: "69e48707-1458-40da-aa50-9f79ccef1297"). InnerVolumeSpecName "kube-api-access-mnxkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.118158 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnxkb\" (UniqueName: \"kubernetes.io/projected/69e48707-1458-40da-aa50-9f79ccef1297-kube-api-access-mnxkb\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.118205 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.144195 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69e48707-1458-40da-aa50-9f79ccef1297" (UID: "69e48707-1458-40da-aa50-9f79ccef1297"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.220202 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e48707-1458-40da-aa50-9f79ccef1297-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.320974 4593 generic.go:334] "Generic (PLEG): container finished" podID="69e48707-1458-40da-aa50-9f79ccef1297" containerID="e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8" exitCode=0 Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.321043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerDied","Data":"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8"} Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.321088 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-drmwg" event={"ID":"69e48707-1458-40da-aa50-9f79ccef1297","Type":"ContainerDied","Data":"41ece5201791f94fd3acfd06ed7b4e84ad465e9e4c76175fabaa5d1d99f6ff8c"} Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.321109 4593 scope.go:117] "RemoveContainer" containerID="e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.321337 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-drmwg" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.367959 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.369181 4593 scope.go:117] "RemoveContainer" containerID="4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.378668 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-drmwg"] Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.401885 4593 scope.go:117] "RemoveContainer" containerID="546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.444614 4593 scope.go:117] "RemoveContainer" containerID="e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8" Jan 29 11:43:08 crc kubenswrapper[4593]: E0129 11:43:08.445183 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8\": container with ID starting with e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8 not found: ID does not exist" containerID="e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.445241 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8"} err="failed to get container status \"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8\": rpc error: code = NotFound desc = could not find container \"e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8\": container with ID starting with e0a4de155f1706caf798b8e641d8c88b3dc6c8fdf3467bc7dc36324000f96fb8 not found: ID does not exist" Jan 29 11:43:08 crc 
kubenswrapper[4593]: I0129 11:43:08.445272 4593 scope.go:117] "RemoveContainer" containerID="4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a" Jan 29 11:43:08 crc kubenswrapper[4593]: E0129 11:43:08.445559 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a\": container with ID starting with 4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a not found: ID does not exist" containerID="4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.445583 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a"} err="failed to get container status \"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a\": rpc error: code = NotFound desc = could not find container \"4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a\": container with ID starting with 4466b7438953b7729477426864e06f6fc38e7a892c32b2c67a1c6fcbc7d9910a not found: ID does not exist" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.445597 4593 scope.go:117] "RemoveContainer" containerID="546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee" Jan 29 11:43:08 crc kubenswrapper[4593]: E0129 11:43:08.445872 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee\": container with ID starting with 546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee not found: ID does not exist" containerID="546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee" Jan 29 11:43:08 crc kubenswrapper[4593]: I0129 11:43:08.445892 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee"} err="failed to get container status \"546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee\": rpc error: code = NotFound desc = could not find container \"546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee\": container with ID starting with 546a626c87c51d9828f84ff955e47e34632be3a5a97a906e461655256d23a2ee not found: ID does not exist" Jan 29 11:43:09 crc kubenswrapper[4593]: I0129 11:43:09.087553 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e48707-1458-40da-aa50-9f79ccef1297" path="/var/lib/kubelet/pods/69e48707-1458-40da-aa50-9f79ccef1297/volumes" Jan 29 11:43:10 crc kubenswrapper[4593]: I0129 11:43:10.075198 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:43:10 crc kubenswrapper[4593]: E0129 11:43:10.075449 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:43:23 crc kubenswrapper[4593]: I0129 11:43:23.075322 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" 
Jan 29 11:43:23 crc kubenswrapper[4593]: E0129 11:43:23.076213 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:43:38 crc kubenswrapper[4593]: I0129 11:43:38.076337 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed"
Jan 29 11:43:38 crc kubenswrapper[4593]: E0129 11:43:38.076979 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:43:46 crc kubenswrapper[4593]: I0129 11:43:46.685436 4593 generic.go:334] "Generic (PLEG): container finished" podID="f45f3aca-42e1-4105-b843-f5288550ce8c" containerID="b57226db838e93862713f292f9315141a4f22f891753ea3cbd93990d176edcc4" exitCode=0
Jan 29 11:43:46 crc kubenswrapper[4593]: I0129 11:43:46.685550 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" event={"ID":"f45f3aca-42e1-4105-b843-f5288550ce8c","Type":"ContainerDied","Data":"b57226db838e93862713f292f9315141a4f22f891753ea3cbd93990d176edcc4"}
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.705284 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg" event={"ID":"f45f3aca-42e1-4105-b843-f5288550ce8c","Type":"ContainerDied","Data":"3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2"}
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.705828 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3286f80b88576b78785252947cf8aa107ce6da1b610419348066eb6fc41347d2"
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.705939 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg"
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854013 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") "
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854079 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") "
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854138 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") "
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854161 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") "
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854243 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") "
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854268 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") "
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854384 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") "
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854407 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") "
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.854445 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjx4m\" (UniqueName: \"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m\") pod \"f45f3aca-42e1-4105-b843-f5288550ce8c\" (UID: \"f45f3aca-42e1-4105-b843-f5288550ce8c\") "
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.873544 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m" (OuterVolumeSpecName: "kube-api-access-wjx4m") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "kube-api-access-wjx4m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.880019 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.881869 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.891463 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.894532 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.896978 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.897902 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.898775 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.911387 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory" (OuterVolumeSpecName: "inventory") pod "f45f3aca-42e1-4105-b843-f5288550ce8c" (UID: "f45f3aca-42e1-4105-b843-f5288550ce8c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956198 4593 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956247 4593 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956259 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjx4m\" (UniqueName: \"kubernetes.io/projected/f45f3aca-42e1-4105-b843-f5288550ce8c-kube-api-access-wjx4m\") on node \"crc\" DevicePath \"\""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956273 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-inventory\") on node \"crc\" DevicePath \"\""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956285 4593 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956297 4593 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956308 4593 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956322 4593 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 29 11:43:48 crc kubenswrapper[4593]: I0129 11:43:48.956335 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f45f3aca-42e1-4105-b843-f5288550ce8c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.715942 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rtfdg"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.880672 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"]
Jan 29 11:43:49 crc kubenswrapper[4593]: E0129 11:43:49.881516 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881548 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server"
Jan 29 11:43:49 crc kubenswrapper[4593]: E0129 11:43:49.881563 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45f3aca-42e1-4105-b843-f5288550ce8c" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881571 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45f3aca-42e1-4105-b843-f5288550ce8c" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:43:49 crc kubenswrapper[4593]: E0129 11:43:49.881597 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="extract-content"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881603 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="extract-content"
Jan 29 11:43:49 crc kubenswrapper[4593]: E0129 11:43:49.881618 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="extract-utilities"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881625 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="extract-utilities"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881880 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e48707-1458-40da-aa50-9f79ccef1297" containerName="registry-server"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.881900 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45f3aca-42e1-4105-b843-f5288550ce8c" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.882698 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.887236 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w4p8f"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.887461 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.889169 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"]
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.889352 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.889359 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.891773 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.975756 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.975807 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.975850 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfxjf\" (UniqueName: \"kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.976015 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"
Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.976060 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") "
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.976096 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:49 crc kubenswrapper[4593]: I0129 11:43:49.976314 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.077825 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.077877 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.077901 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfxjf\" (UniqueName: \"kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.077968 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.078000 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.078027 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.078060 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.082746 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.082947 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.084037 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.084960 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.086179 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.086475 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.101286 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfxjf\" 
(UniqueName: \"kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.208046 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.556260 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz"] Jan 29 11:43:50 crc kubenswrapper[4593]: I0129 11:43:50.725499 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" event={"ID":"ee0ea7fe-3ea4-4944-8101-b03f1566882f","Type":"ContainerStarted","Data":"059bd591328bff46e6e65cfb00889c1f2fc8ff93c51a070940e99bbd963791fa"} Jan 29 11:43:51 crc kubenswrapper[4593]: I0129 11:43:51.075825 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:43:51 crc kubenswrapper[4593]: E0129 11:43:51.076147 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:43:51 crc kubenswrapper[4593]: I0129 11:43:51.734621 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" event={"ID":"ee0ea7fe-3ea4-4944-8101-b03f1566882f","Type":"ContainerStarted","Data":"f616db1f2537dd79ee16bc7d11fbdfb4f7448ae23d7f778070810ae6e0373cc3"} Jan 29 11:43:51 crc kubenswrapper[4593]: I0129 11:43:51.774208 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" podStartSLOduration=2.370623561 podStartE2EDuration="2.774174985s" podCreationTimestamp="2026-01-29 11:43:49 +0000 UTC" firstStartedPulling="2026-01-29 11:43:50.563841883 +0000 UTC m=+2696.436876094" lastFinishedPulling="2026-01-29 11:43:50.967393317 +0000 UTC m=+2696.840427518" observedRunningTime="2026-01-29 11:43:51.766169808 +0000 UTC m=+2697.639204009" watchObservedRunningTime="2026-01-29 11:43:51.774174985 +0000 UTC m=+2697.647209176" Jan 29 11:44:03 crc kubenswrapper[4593]: I0129 11:44:03.075255 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:44:03 crc kubenswrapper[4593]: E0129 11:44:03.075891 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:44:16 crc kubenswrapper[4593]: I0129 11:44:16.074505 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:44:16 crc 
kubenswrapper[4593]: I0129 11:44:16.983095 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63"} Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.148766 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl"] Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.150317 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.153146 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.153854 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.170555 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl"] Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.267095 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.267160 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh6zp\" (UniqueName: \"kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.267264 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.368721 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.368763 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh6zp\" (UniqueName: \"kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 
11:45:00.368817 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.369746 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.375474 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.386268 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh6zp\" (UniqueName: \"kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp\") pod \"collect-profiles-29494785-5jqfl\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.481151 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:00 crc kubenswrapper[4593]: I0129 11:45:00.939792 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl"] Jan 29 11:45:01 crc kubenswrapper[4593]: I0129 11:45:01.410375 4593 generic.go:334] "Generic (PLEG): container finished" podID="dc4e2861-f7e0-40bb-bb77-b0fdd3498554" containerID="774b5de0fbc462ffcb1b94ee57144a8198c30add9d0ae3a9eee99f2a26a14b82" exitCode=0 Jan 29 11:45:01 crc kubenswrapper[4593]: I0129 11:45:01.410480 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" event={"ID":"dc4e2861-f7e0-40bb-bb77-b0fdd3498554","Type":"ContainerDied","Data":"774b5de0fbc462ffcb1b94ee57144a8198c30add9d0ae3a9eee99f2a26a14b82"} Jan 29 11:45:01 crc kubenswrapper[4593]: I0129 11:45:01.410752 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" event={"ID":"dc4e2861-f7e0-40bb-bb77-b0fdd3498554","Type":"ContainerStarted","Data":"c88db5300c04314732be5ce93aae32e7d41e372a77e36185fe67c16c38035005"} Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.733966 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.816677 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh6zp\" (UniqueName: \"kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp\") pod \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.816865 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume\") pod \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.817053 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume\") pod \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\" (UID: \"dc4e2861-f7e0-40bb-bb77-b0fdd3498554\") " Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.818020 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume" (OuterVolumeSpecName: "config-volume") pod "dc4e2861-f7e0-40bb-bb77-b0fdd3498554" (UID: "dc4e2861-f7e0-40bb-bb77-b0fdd3498554"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.822541 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dc4e2861-f7e0-40bb-bb77-b0fdd3498554" (UID: "dc4e2861-f7e0-40bb-bb77-b0fdd3498554"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.845843 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp" (OuterVolumeSpecName: "kube-api-access-nh6zp") pod "dc4e2861-f7e0-40bb-bb77-b0fdd3498554" (UID: "dc4e2861-f7e0-40bb-bb77-b0fdd3498554"). InnerVolumeSpecName "kube-api-access-nh6zp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.919818 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.920141 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh6zp\" (UniqueName: \"kubernetes.io/projected/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-kube-api-access-nh6zp\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:02 crc kubenswrapper[4593]: I0129 11:45:02.920246 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc4e2861-f7e0-40bb-bb77-b0fdd3498554-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 11:45:03 crc kubenswrapper[4593]: I0129 11:45:03.429069 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" Jan 29 11:45:03 crc kubenswrapper[4593]: I0129 11:45:03.429029 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl" event={"ID":"dc4e2861-f7e0-40bb-bb77-b0fdd3498554","Type":"ContainerDied","Data":"c88db5300c04314732be5ce93aae32e7d41e372a77e36185fe67c16c38035005"} Jan 29 11:45:03 crc kubenswrapper[4593]: I0129 11:45:03.429942 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88db5300c04314732be5ce93aae32e7d41e372a77e36185fe67c16c38035005" Jan 29 11:45:03 crc kubenswrapper[4593]: I0129 11:45:03.832967 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"] Jan 29 11:45:03 crc kubenswrapper[4593]: I0129 11:45:03.842131 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494740-bkdhm"] Jan 29 11:45:05 crc kubenswrapper[4593]: I0129 11:45:05.109678 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eef5dc1f-d576-46dd-9de7-2a63c6d4157f" path="/var/lib/kubelet/pods/eef5dc1f-d576-46dd-9de7-2a63c6d4157f/volumes" Jan 29 11:45:15 crc kubenswrapper[4593]: I0129 11:45:15.111358 4593 scope.go:117] "RemoveContainer" containerID="a42849f610d885535cd0e60eaaa2528c5e1fd8e251ed36cfc95a9501172d4972" Jan 29 11:46:33 crc kubenswrapper[4593]: I0129 11:46:33.946417 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:46:33 crc kubenswrapper[4593]: I0129 11:46:33.947138 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:47:03 crc kubenswrapper[4593]: I0129 11:47:03.945762 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:47:03 crc kubenswrapper[4593]: I0129 11:47:03.946400 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:47:17 crc kubenswrapper[4593]: I0129 11:47:17.031941 4593 generic.go:334] "Generic (PLEG): container finished" podID="ee0ea7fe-3ea4-4944-8101-b03f1566882f" containerID="f616db1f2537dd79ee16bc7d11fbdfb4f7448ae23d7f778070810ae6e0373cc3" exitCode=0 Jan 29 11:47:17 crc kubenswrapper[4593]: I0129 11:47:17.033131 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" 
event={"ID":"ee0ea7fe-3ea4-4944-8101-b03f1566882f","Type":"ContainerDied","Data":"f616db1f2537dd79ee16bc7d11fbdfb4f7448ae23d7f778070810ae6e0373cc3"} Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.520492 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668171 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668271 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668312 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668365 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfxjf\" (UniqueName: \"kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668407 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668460 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.668536 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam\") pod \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\" (UID: \"ee0ea7fe-3ea4-4944-8101-b03f1566882f\") " Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.679057 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf" (OuterVolumeSpecName: "kube-api-access-sfxjf") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "kube-api-access-sfxjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.679389 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.697906 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.703920 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.708657 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.720818 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.728721 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory" (OuterVolumeSpecName: "inventory") pod "ee0ea7fe-3ea4-4944-8101-b03f1566882f" (UID: "ee0ea7fe-3ea4-4944-8101-b03f1566882f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773014 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfxjf\" (UniqueName: \"kubernetes.io/projected/ee0ea7fe-3ea4-4944-8101-b03f1566882f-kube-api-access-sfxjf\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773065 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773080 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773108 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773122 4593 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773168 4593 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-inventory\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:18 crc kubenswrapper[4593]: I0129 11:47:18.773183 4593 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee0ea7fe-3ea4-4944-8101-b03f1566882f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 11:47:19 crc kubenswrapper[4593]: I0129 11:47:19.053511 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" Jan 29 11:47:19 crc kubenswrapper[4593]: I0129 11:47:19.053265 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz" event={"ID":"ee0ea7fe-3ea4-4944-8101-b03f1566882f","Type":"ContainerDied","Data":"059bd591328bff46e6e65cfb00889c1f2fc8ff93c51a070940e99bbd963791fa"} Jan 29 11:47:19 crc kubenswrapper[4593]: I0129 11:47:19.053603 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="059bd591328bff46e6e65cfb00889c1f2fc8ff93c51a070940e99bbd963791fa" Jan 29 11:47:33 crc kubenswrapper[4593]: I0129 11:47:33.947877 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:47:33 crc kubenswrapper[4593]: I0129 11:47:33.950004 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:47:33 crc kubenswrapper[4593]: I0129 11:47:33.950106 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:47:33 crc kubenswrapper[4593]: I0129 11:47:33.951140 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:47:33 crc kubenswrapper[4593]: I0129 11:47:33.951227 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63" gracePeriod=600 Jan 29 11:47:34 crc kubenswrapper[4593]: I0129 11:47:34.210833 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63" exitCode=0 Jan 29 11:47:34 crc kubenswrapper[4593]: I0129 11:47:34.211172 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63"} Jan 29 11:47:34 crc kubenswrapper[4593]: I0129 11:47:34.211255 4593 scope.go:117] "RemoveContainer" containerID="3b4224a6440a519ec04885f2f21052e97ed79a8b26a7b05432f460f058a977ed" Jan 29 11:47:35 crc kubenswrapper[4593]: I0129 11:47:35.221536 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2"} Jan 29 11:48:21 crc 
kubenswrapper[4593]: I0129 11:48:21.151359 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 11:48:21 crc kubenswrapper[4593]: E0129 11:48:21.153408 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee0ea7fe-3ea4-4944-8101-b03f1566882f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.153513 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee0ea7fe-3ea4-4944-8101-b03f1566882f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 29 11:48:21 crc kubenswrapper[4593]: E0129 11:48:21.153586 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4e2861-f7e0-40bb-bb77-b0fdd3498554" containerName="collect-profiles" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.153677 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4e2861-f7e0-40bb-bb77-b0fdd3498554" containerName="collect-profiles" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.154031 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee0ea7fe-3ea4-4944-8101-b03f1566882f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.154505 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4e2861-f7e0-40bb-bb77-b0fdd3498554" containerName="collect-profiles" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.155288 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.159288 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.163100 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.167435 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vt7mb" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.168012 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.177307 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233450 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2hc\" (UniqueName: \"kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233507 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233565 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233729 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233769 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233810 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233897 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233924 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.233994 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336310 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs2hc\" (UniqueName: \"kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336399 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336464 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336533 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336569 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336606 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336746 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336789 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.336860 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.337167 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.337181 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.337804 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " 
pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.338087 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.343344 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.343486 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.344528 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.350667 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.356916 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs2hc\" (UniqueName: \"kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.378505 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " pod="openstack/tempest-tests-tempest" Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.477454 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.958516 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"]
Jan 29 11:48:21 crc kubenswrapper[4593]: I0129 11:48:21.962886 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 11:48:22 crc kubenswrapper[4593]: I0129 11:48:22.648275 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5ea9892-a149-4cfe-bb9c-ef636eacd125","Type":"ContainerStarted","Data":"bf88caa96b3fd17945a137b250bf9d7f8872b0e8469ad3aa1ab198d63888646d"}
Jan 29 11:49:19 crc kubenswrapper[4593]: E0129 11:49:19.962646 4593 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified"
Jan 29 11:49:19 crc kubenswrapper[4593]: E0129 11:49:19.966491 4593 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs2hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(d5ea9892-a149-4cfe-bb9c-ef636eacd125): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 29 11:49:19 crc kubenswrapper[4593]: E0129 11:49:19.967765 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="d5ea9892-a149-4cfe-bb9c-ef636eacd125"
Jan 29 11:49:20 crc kubenswrapper[4593]: E0129 11:49:20.251467 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="d5ea9892-a149-4cfe-bb9c-ef636eacd125"
Jan 29 11:49:35 crc kubenswrapper[4593]: I0129 11:49:35.605380 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 29 11:49:37 crc kubenswrapper[4593]: I0129 11:49:37.447161 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5ea9892-a149-4cfe-bb9c-ef636eacd125","Type":"ContainerStarted","Data":"f1bbc49dcc0cd36e38a7fd4617bfb0fd01fe811e0e734a91b4f25ae6b23bbeaf"}
Jan 29 11:49:37 crc kubenswrapper[4593]: I0129 11:49:37.473016 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.833671652 podStartE2EDuration="1m17.472982474s" podCreationTimestamp="2026-01-29 11:48:20 +0000 UTC" firstStartedPulling="2026-01-29 11:48:21.962536529 +0000 UTC m=+2967.835570720" lastFinishedPulling="2026-01-29 11:49:35.601847351 +0000 UTC m=+3041.474881542" observedRunningTime="2026-01-29 11:49:37.470279221 +0000 UTC m=+3043.343313412" watchObservedRunningTime="2026-01-29 11:49:37.472982474 +0000 UTC m=+3043.346016665"
Jan 29 11:50:03 crc kubenswrapper[4593]: I0129 11:50:03.946459 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 11:50:03 crc kubenswrapper[4593]: I0129 11:50:03.947148 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.349678 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"]
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.353269 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.415810 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbrl2\" (UniqueName: \"kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.415901 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.415939 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.502499 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"]
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.517393 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.518010 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.517592 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbrl2\" (UniqueName: \"kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.518475 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.518837 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.551595 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbrl2\" (UniqueName: \"kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2\") pod \"redhat-marketplace-gjxww\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") " pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:29 crc kubenswrapper[4593]: I0129 11:50:29.687996 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:30 crc kubenswrapper[4593]: I0129 11:50:30.496658 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"]
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.115898 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58nql"]
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.118877 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.123297 4593 generic.go:334] "Generic (PLEG): container finished" podID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerID="93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189" exitCode=0
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.123376 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerDied","Data":"93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189"}
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.123423 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerStarted","Data":"9c4bf50beffc67a77f212f98f53ffeb5265c547884bf5bccd7cd8cbcbe7a9fa7"}
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.135754 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58nql"]
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.269246 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jpgk\" (UniqueName: \"kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.269557 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.269708 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.371991 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jpgk\" (UniqueName: \"kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.372064 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.372122 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.372663 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.373214 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.398043 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jpgk\" (UniqueName: \"kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk\") pod \"certified-operators-58nql\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") " pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.467098 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.775733 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9chvf"]
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.781730 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.790777 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9chvf"]
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.885035 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.885254 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.885293 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdrr\" (UniqueName: \"kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.988601 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.988678 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jdrr\" (UniqueName: \"kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.988779 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.989112 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:31 crc kubenswrapper[4593]: I0129 11:50:31.989245 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:32 crc kubenswrapper[4593]: I0129 11:50:32.027887 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58nql"]
Jan 29 11:50:32 crc kubenswrapper[4593]: I0129 11:50:32.032157 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jdrr\" (UniqueName: \"kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr\") pod \"community-operators-9chvf\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") " pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:32 crc kubenswrapper[4593]: I0129 11:50:32.126063 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:32 crc kubenswrapper[4593]: I0129 11:50:32.158341 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerStarted","Data":"6ace7fce8dca888321cdd4f035fa5e56a84f122f5c45639df165368111d7df69"}
Jan 29 11:50:32 crc kubenswrapper[4593]: I0129 11:50:32.700258 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9chvf"]
Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.177162 4593 generic.go:334] "Generic (PLEG): container finished" podID="0c132853-6130-49f2-a704-a03e51d90d5b" containerID="24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f" exitCode=0
Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.178468 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerDied","Data":"24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f"}
Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.178502 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerStarted","Data":"4577186316c08b3900720726645ed16abaae0f401c8a9700e23d4a86b7c97742"}
Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.180805 4593 generic.go:334] "Generic (PLEG): container finished" podID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerID="8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2" exitCode=0
Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.180936 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerDied","Data":"8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2"}
Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.945960 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 11:50:33 crc kubenswrapper[4593]: I0129 11:50:33.946364 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 11:50:34 crc kubenswrapper[4593]: I0129 11:50:34.205022 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerStarted","Data":"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300"}
Jan 29 11:50:34 crc kubenswrapper[4593]: I0129 11:50:34.211846 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerStarted","Data":"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3"}
Jan 29 11:50:36 crc kubenswrapper[4593]: I0129 11:50:36.230111 4593 generic.go:334] "Generic (PLEG): container finished" podID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerID="0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300" exitCode=0
Jan 29 11:50:36 crc kubenswrapper[4593]: I0129 11:50:36.231594 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerDied","Data":"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300"}
Jan 29 11:50:36 crc kubenswrapper[4593]: I0129 11:50:36.234997 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerStarted","Data":"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d"}
Jan 29 11:50:39 crc kubenswrapper[4593]: I0129 11:50:39.278149 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerStarted","Data":"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87"}
Jan 29 11:50:39 crc kubenswrapper[4593]: I0129 11:50:39.304705 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gjxww" podStartSLOduration=3.591708669 podStartE2EDuration="10.304670768s" podCreationTimestamp="2026-01-29 11:50:29 +0000 UTC" firstStartedPulling="2026-01-29 11:50:31.127002968 +0000 UTC m=+3097.000037149" lastFinishedPulling="2026-01-29 11:50:37.839965057 +0000 UTC m=+3103.712999248" observedRunningTime="2026-01-29 11:50:39.30325248 +0000 UTC m=+3105.176286671" watchObservedRunningTime="2026-01-29 11:50:39.304670768 +0000 UTC m=+3105.177704959"
Jan 29 11:50:39 crc kubenswrapper[4593]: I0129 11:50:39.688280 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:39 crc kubenswrapper[4593]: I0129 11:50:39.688334 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:40 crc kubenswrapper[4593]: I0129 11:50:40.289414 4593 generic.go:334] "Generic (PLEG): container finished" podID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerID="90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3" exitCode=0
Jan 29 11:50:40 crc kubenswrapper[4593]: I0129 11:50:40.289477 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerDied","Data":"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3"}
Jan 29 11:50:40 crc kubenswrapper[4593]: I0129 11:50:40.738170 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gjxww" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:50:40 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:50:40 crc kubenswrapper[4593]: >
Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.302323 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerStarted","Data":"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815"}
Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.305515 4593 generic.go:334] "Generic (PLEG): container finished" podID="0c132853-6130-49f2-a704-a03e51d90d5b" containerID="2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d" exitCode=0
Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.305573 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerDied","Data":"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d"}
Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.394009 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58nql" podStartSLOduration=2.861035154 podStartE2EDuration="10.393985819s" podCreationTimestamp="2026-01-29 11:50:31 +0000 UTC" firstStartedPulling="2026-01-29 11:50:33.194396984 +0000 UTC m=+3099.067431175" lastFinishedPulling="2026-01-29 11:50:40.727347649 +0000 UTC m=+3106.600381840" observedRunningTime="2026-01-29 11:50:41.363862381 +0000 UTC m=+3107.236896572" watchObservedRunningTime="2026-01-29 11:50:41.393985819 +0000 UTC m=+3107.267020020"
Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.467684 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:41 crc kubenswrapper[4593]: I0129 11:50:41.468010 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:50:42 crc kubenswrapper[4593]: I0129 11:50:42.564919 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-58nql" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:50:42 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:50:42 crc kubenswrapper[4593]: >
Jan 29 11:50:43 crc kubenswrapper[4593]: I0129 11:50:43.326575 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerStarted","Data":"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554"}
Jan 29 11:50:43 crc kubenswrapper[4593]: I0129 11:50:43.386796 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9chvf" podStartSLOduration=2.637036507 podStartE2EDuration="12.386773561s" podCreationTimestamp="2026-01-29 11:50:31 +0000 UTC" firstStartedPulling="2026-01-29 11:50:33.194190228 +0000 UTC m=+3099.067224419" lastFinishedPulling="2026-01-29 11:50:42.943927282 +0000 UTC m=+3108.816961473" observedRunningTime="2026-01-29 11:50:43.382476994 +0000 UTC m=+3109.255511185" watchObservedRunningTime="2026-01-29 11:50:43.386773561 +0000 UTC m=+3109.259807752"
Jan 29 11:50:50 crc kubenswrapper[4593]: I0129 11:50:50.735990 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gjxww" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:50:50 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:50:50 crc kubenswrapper[4593]: >
Jan 29 11:50:52 crc kubenswrapper[4593]: I0129 11:50:52.126898 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:52 crc kubenswrapper[4593]: I0129 11:50:52.126960 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:50:52 crc kubenswrapper[4593]: I0129 11:50:52.542105 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-58nql" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:50:52 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:50:52 crc kubenswrapper[4593]: >
Jan 29 11:50:53 crc kubenswrapper[4593]: I0129 11:50:53.185167 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9chvf" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="registry-server" probeResult="failure" output=<
Jan 29 11:50:53 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 11:50:53 crc kubenswrapper[4593]: >
Jan 29 11:50:59 crc kubenswrapper[4593]: I0129 11:50:59.742093 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:50:59 crc kubenswrapper[4593]: I0129 11:50:59.810537 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:51:00 crc kubenswrapper[4593]: I0129 11:51:00.541427 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"]
Jan 29 11:51:01 crc kubenswrapper[4593]: I0129 11:51:01.502192 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gjxww" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" containerID="cri-o://de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87" gracePeriod=2
Jan 29 11:51:01 crc kubenswrapper[4593]: I0129 11:51:01.527985 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:51:01 crc kubenswrapper[4593]: I0129 11:51:01.580906 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.189458 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.222218 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.258823 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.320104 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content\") pod \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") "
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.320246 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities\") pod \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") "
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.320374 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbrl2\" (UniqueName: \"kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2\") pod \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\" (UID: \"8e6133a0-5080-40db-ab5c-3f6e365b33f0\") "
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.321247 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities" (OuterVolumeSpecName: "utilities") pod "8e6133a0-5080-40db-ab5c-3f6e365b33f0" (UID: "8e6133a0-5080-40db-ab5c-3f6e365b33f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.322795 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.337969 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2" (OuterVolumeSpecName: "kube-api-access-vbrl2") pod "8e6133a0-5080-40db-ab5c-3f6e365b33f0" (UID: "8e6133a0-5080-40db-ab5c-3f6e365b33f0"). InnerVolumeSpecName "kube-api-access-vbrl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.354896 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e6133a0-5080-40db-ab5c-3f6e365b33f0" (UID: "8e6133a0-5080-40db-ab5c-3f6e365b33f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.424649 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbrl2\" (UniqueName: \"kubernetes.io/projected/8e6133a0-5080-40db-ab5c-3f6e365b33f0-kube-api-access-vbrl2\") on node \"crc\" DevicePath \"\""
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.424688 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e6133a0-5080-40db-ab5c-3f6e365b33f0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.512830 4593 generic.go:334] "Generic (PLEG): container finished" podID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerID="de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87" exitCode=0
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.512907 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gjxww"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.512989 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerDied","Data":"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87"}
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.513074 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gjxww" event={"ID":"8e6133a0-5080-40db-ab5c-3f6e365b33f0","Type":"ContainerDied","Data":"9c4bf50beffc67a77f212f98f53ffeb5265c547884bf5bccd7cd8cbcbe7a9fa7"}
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.513995 4593 scope.go:117] "RemoveContainer" containerID="de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.538350 4593 scope.go:117] "RemoveContainer" containerID="0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.566799 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"]
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.607567 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gjxww"]
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.626034 4593 scope.go:117] "RemoveContainer" containerID="93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.670849 4593 scope.go:117] "RemoveContainer" containerID="de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87"
Jan 29 11:51:02 crc kubenswrapper[4593]: E0129 11:51:02.671361 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87\": container with ID starting with de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87 not found: ID does not exist" containerID="de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.671419 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87"} err="failed to get container status \"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87\": rpc error: code = NotFound desc = could not find container \"de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87\": container with ID starting with de554a557ce317ac571576096879ffae4aa252bb0b2231e33badc615f0df1f87 not found: ID does not exist"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.671441 4593 scope.go:117] "RemoveContainer" containerID="0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300"
Jan 29 11:51:02 crc kubenswrapper[4593]: E0129 11:51:02.671788 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300\": container with ID starting with 0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300 not found: ID does not exist" containerID="0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.671830 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300"} err="failed to get container status \"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300\": rpc error: code = NotFound desc = could not find container \"0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300\": container with ID starting with 0a1d0389a4fb73d32a71e925c8059f2300f339d328b9785ed5b7568503bed300 not found: ID does not exist"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.671848 4593 scope.go:117] "RemoveContainer" containerID="93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189"
Jan 29 11:51:02 crc kubenswrapper[4593]: E0129 11:51:02.672479 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189\": container with ID starting with 93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189 not found: ID does not exist" containerID="93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189"
Jan 29 11:51:02 crc kubenswrapper[4593]: I0129 11:51:02.672508 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189"} err="failed to get container status \"93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189\": rpc error: code = NotFound desc = could not find container \"93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189\": container with ID starting with 93462b5084ae427e1d77c6129f4f72a1b2c59194dab25968640d88484e1a9189 not found: ID does not exist"
Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.087260 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" path="/var/lib/kubelet/pods/8e6133a0-5080-40db-ab5c-3f6e365b33f0/volumes"
Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.335827 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58nql"]
Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.525949 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-58nql" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" containerID="cri-o://5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815" gracePeriod=2
Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.949779 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.950026 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.950073 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2"
Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.950905 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 11:51:03 crc kubenswrapper[4593]: I0129 11:51:03.950952 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" gracePeriod=600
Jan 29 11:51:04 crc kubenswrapper[4593]: E0129 11:51:04.092148 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.251149 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.400594 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content\") pod \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") "
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.400818 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities\") pod \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") "
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.400842 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jpgk\" (UniqueName: \"kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk\") pod \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\" (UID: \"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb\") "
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.403251 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities" (OuterVolumeSpecName: "utilities") pod "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" (UID: "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.407079 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk" (OuterVolumeSpecName: "kube-api-access-6jpgk") pod "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" (UID: "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb"). InnerVolumeSpecName "kube-api-access-6jpgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.473458 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" (UID: "a1f44c51-4d7a-46f4-9840-a5ba6f763fbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.503423 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.503469 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.503485 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jpgk\" (UniqueName: \"kubernetes.io/projected/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb-kube-api-access-6jpgk\") on node \"crc\" DevicePath \"\""
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.548850 4593 generic.go:334] "Generic (PLEG): container finished" podID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerID="5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815" exitCode=0
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.548943 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerDied","Data":"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815"}
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.548975 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58nql" event={"ID":"a1f44c51-4d7a-46f4-9840-a5ba6f763fbb","Type":"ContainerDied","Data":"6ace7fce8dca888321cdd4f035fa5e56a84f122f5c45639df165368111d7df69"}
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.548997 4593 scope.go:117] "RemoveContainer" containerID="5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.549139 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58nql"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.575775 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" exitCode=0
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.577760 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2"}
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.583420 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2"
Jan 29 11:51:04 crc kubenswrapper[4593]: E0129 11:51:04.587010 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.594501 4593 scope.go:117] "RemoveContainer" containerID="90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.613586 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58nql"]
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.628815 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-58nql"]
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.659888 4593 scope.go:117] "RemoveContainer" containerID="8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.741214 4593 scope.go:117] "RemoveContainer" containerID="5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815"
Jan 29 11:51:04 crc kubenswrapper[4593]: E0129 11:51:04.741751 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815\": container with ID starting with 5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815 not found: ID does not exist" containerID="5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.741794 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815"} err="failed to get container status \"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815\": rpc error: code = NotFound desc = could not find container \"5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815\": container with ID starting with 5767804e82bb2d97f0d917bf6baa492ae58ff5955034f15cb300cec81e6d1815 not found: ID does not exist"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.741822 4593 scope.go:117] "RemoveContainer" containerID="90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3"
Jan 29 11:51:04 crc kubenswrapper[4593]: E0129 11:51:04.742045 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3\": container with ID starting with 90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3 not found: ID does not exist" containerID="90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.742065 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3"} err="failed to get container status \"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3\": rpc error: code = NotFound desc = could not find container \"90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3\": container with ID starting with 90d216c36c6523394d38d71d98b69d37fc329b8967543ce7e528940ec7a880f3 not found: ID does not exist"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.742081 4593 scope.go:117] "RemoveContainer" containerID="8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2"
Jan 29 11:51:04 crc kubenswrapper[4593]: E0129 11:51:04.743513 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2\": container with ID starting with 8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2 not found: ID does not exist" containerID="8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.743556 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2"} err="failed to get container status \"8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2\": rpc error: code = NotFound desc = could not find container \"8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2\": container with ID starting with 8eff945346ca74495c997e723f02a87cff7567624b559464a980efc5b2e563d2 not found: ID does not exist"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.743572 4593 scope.go:117] "RemoveContainer" containerID="bfb82950e01f3d639ea66fd0ea5efa40eb790dae9af6d7372f3c56962ee7ab63"
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.755368 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9chvf"]
Jan 29 11:51:04 crc kubenswrapper[4593]: I0129 11:51:04.755677 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9chvf" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="registry-server" containerID="cri-o://cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554" gracePeriod=2
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.099046 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" path="/var/lib/kubelet/pods/a1f44c51-4d7a-46f4-9840-a5ba6f763fbb/volumes"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.430249 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.530070 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities\") pod \"0c132853-6130-49f2-a704-a03e51d90d5b\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") "
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.530238 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jdrr\" (UniqueName: \"kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr\") pod \"0c132853-6130-49f2-a704-a03e51d90d5b\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") "
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.530267 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content\") pod \"0c132853-6130-49f2-a704-a03e51d90d5b\" (UID: \"0c132853-6130-49f2-a704-a03e51d90d5b\") "
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.530921 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities" (OuterVolumeSpecName: "utilities") pod "0c132853-6130-49f2-a704-a03e51d90d5b" (UID: "0c132853-6130-49f2-a704-a03e51d90d5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.560830 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr" (OuterVolumeSpecName: "kube-api-access-8jdrr") pod "0c132853-6130-49f2-a704-a03e51d90d5b" (UID: "0c132853-6130-49f2-a704-a03e51d90d5b"). InnerVolumeSpecName "kube-api-access-8jdrr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.594305 4593 generic.go:334] "Generic (PLEG): container finished" podID="0c132853-6130-49f2-a704-a03e51d90d5b" containerID="cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554" exitCode=0
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.594504 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerDied","Data":"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554"}
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.594534 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9chvf" event={"ID":"0c132853-6130-49f2-a704-a03e51d90d5b","Type":"ContainerDied","Data":"4577186316c08b3900720726645ed16abaae0f401c8a9700e23d4a86b7c97742"}
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.594551 4593 scope.go:117] "RemoveContainer" containerID="cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.594671 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9chvf"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.628878 4593 scope.go:117] "RemoveContainer" containerID="2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.633889 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jdrr\" (UniqueName: \"kubernetes.io/projected/0c132853-6130-49f2-a704-a03e51d90d5b-kube-api-access-8jdrr\") on node \"crc\" DevicePath \"\""
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.633912 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.636186 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c132853-6130-49f2-a704-a03e51d90d5b" (UID: "0c132853-6130-49f2-a704-a03e51d90d5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.670973 4593 scope.go:117] "RemoveContainer" containerID="24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.735523 4593 scope.go:117] "RemoveContainer" containerID="cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.735975 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c132853-6130-49f2-a704-a03e51d90d5b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 11:51:05 crc kubenswrapper[4593]: E0129 11:51:05.739916 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554\": container with ID starting with cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554 not found: ID does not exist" containerID="cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.740028 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554"} err="failed to get container status \"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554\": rpc error: code = NotFound desc = could not find container \"cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554\": container with ID starting with cb8987f0fd8fc0fa7983abe27e210b86779bfa8f385a1745413e32ba05c15554 not found: ID does not exist"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.740061 4593 scope.go:117] "RemoveContainer" containerID="2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d"
Jan 29 11:51:05 crc kubenswrapper[4593]: E0129 11:51:05.740515 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d\": container with ID starting with 2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d not found: ID does not exist" containerID="2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.740555 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d"} err="failed to get container status \"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d\": rpc error: code = NotFound desc = could not find container \"2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d\": container with ID starting with 2755b91fc56bd5e51826f581dce4aa09be5824296781872b46d3ba1906a7a99d not found: ID does not exist"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.740574 4593 scope.go:117] "RemoveContainer" containerID="24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f"
Jan 29 11:51:05 crc kubenswrapper[4593]: E0129 11:51:05.741230 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f\": container with ID starting with 24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f not found: ID does not exist" containerID="24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.741284 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f"} err="failed to get container status \"24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f\": rpc error: code = NotFound desc = could not find container \"24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f\": container with ID starting with 24f34bbff12e05d5eb5edd518ac64bab37b7a12315260730a2c81b88f2777b1f not found: ID does not exist"
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.932333 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9chvf"]
Jan 29 11:51:05 crc kubenswrapper[4593]: I0129 11:51:05.943344 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9chvf"]
Jan 29 11:51:07 crc kubenswrapper[4593]: I0129 11:51:07.089472 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" path="/var/lib/kubelet/pods/0c132853-6130-49f2-a704-a03e51d90d5b/volumes"
Jan 29 11:51:19 crc kubenswrapper[4593]: I0129 11:51:19.075739 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2"
Jan 29 11:51:19 crc kubenswrapper[4593]: E0129 11:51:19.077619 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 11:51:32 crc kubenswrapper[4593]: I0129 11:51:32.074531 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2"
Jan 29 11:51:32 crc kubenswrapper[4593]: E0129 11:51:32.075242 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:51:44 crc kubenswrapper[4593]: I0129 11:51:44.075300 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:51:44 crc kubenswrapper[4593]: E0129 11:51:44.076192 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:51:58 crc kubenswrapper[4593]: I0129 11:51:58.075923 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:51:58 crc kubenswrapper[4593]: E0129 11:51:58.076819 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:52:09 crc kubenswrapper[4593]: I0129 11:52:09.075056 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:52:09 crc kubenswrapper[4593]: E0129 11:52:09.075907 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:52:22 crc kubenswrapper[4593]: I0129 11:52:22.075299 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:52:22 crc kubenswrapper[4593]: E0129 11:52:22.076057 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:52:36 crc kubenswrapper[4593]: I0129 11:52:36.075511 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:52:36 crc kubenswrapper[4593]: E0129 11:52:36.076275 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:52:49 crc kubenswrapper[4593]: I0129 11:52:49.075800 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:52:49 crc kubenswrapper[4593]: E0129 11:52:49.076757 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:53:01 crc kubenswrapper[4593]: I0129 11:53:01.075604 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:53:01 crc kubenswrapper[4593]: E0129 11:53:01.076431 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:53:12 crc kubenswrapper[4593]: I0129 11:53:12.075111 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:53:12 crc kubenswrapper[4593]: E0129 11:53:12.076997 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:53:27 crc kubenswrapper[4593]: I0129 11:53:27.075234 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:53:27 crc kubenswrapper[4593]: E0129 11:53:27.076217 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:53:39 crc kubenswrapper[4593]: I0129 11:53:39.076363 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:53:39 crc kubenswrapper[4593]: E0129 11:53:39.077379 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:53:53 crc kubenswrapper[4593]: I0129 11:53:53.074797 4593 
scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:53:53 crc kubenswrapper[4593]: E0129 11:53:53.076733 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:54:08 crc kubenswrapper[4593]: I0129 11:54:08.075387 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:54:08 crc kubenswrapper[4593]: E0129 11:54:08.076346 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:54:23 crc kubenswrapper[4593]: I0129 11:54:23.075136 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:54:23 crc kubenswrapper[4593]: E0129 11:54:23.075920 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.306026 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307107 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307138 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307152 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307158 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307178 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307188 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307196 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" Jan 29 11:54:27 crc 
kubenswrapper[4593]: I0129 11:54:27.307209 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307223 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307228 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307235 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307241 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307254 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307260 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="extract-utilities" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307271 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307278 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="extract-content" Jan 29 11:54:27 crc kubenswrapper[4593]: E0129 11:54:27.307290 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307298 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307537 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c132853-6130-49f2-a704-a03e51d90d5b" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307557 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6133a0-5080-40db-ab5c-3f6e365b33f0" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.307572 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1f44c51-4d7a-46f4-9840-a5ba6f763fbb" containerName="registry-server" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.308992 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.333417 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.470473 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.470827 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.471239 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9qsk\" (UniqueName: \"kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.572919 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9qsk\" (UniqueName: \"kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.573057 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.573134 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.573701 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.573826 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.604394 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c9qsk\" (UniqueName: \"kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk\") pod \"redhat-operators-bddf7\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:27 crc kubenswrapper[4593]: I0129 11:54:27.640131 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:28 crc kubenswrapper[4593]: I0129 11:54:28.207190 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:54:28 crc kubenswrapper[4593]: I0129 11:54:28.537533 4593 generic.go:334] "Generic (PLEG): container finished" podID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerID="bc929a23cf8d1038032aac760cbbd186410de536e009c9bb9f788f8fc8527d9a" exitCode=0 Jan 29 11:54:28 crc kubenswrapper[4593]: I0129 11:54:28.537578 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerDied","Data":"bc929a23cf8d1038032aac760cbbd186410de536e009c9bb9f788f8fc8527d9a"} Jan 29 11:54:28 crc kubenswrapper[4593]: I0129 11:54:28.537611 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerStarted","Data":"a8f692cc178e40d6dd2a183f0f930fe61616b4622888a4583e31fe0b88efede4"} Jan 29 11:54:28 crc kubenswrapper[4593]: I0129 11:54:28.540863 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 11:54:31 crc kubenswrapper[4593]: I0129 11:54:31.575796 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerStarted","Data":"0521cd49bca7037f1b806186b9b5d16633c8ca28d994c0657a3e91d697c24158"} Jan 29 11:54:34 crc kubenswrapper[4593]: I0129 11:54:34.075042 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:54:34 crc kubenswrapper[4593]: E0129 11:54:34.075616 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:54:40 crc kubenswrapper[4593]: I0129 11:54:40.664970 4593 generic.go:334] "Generic (PLEG): container finished" podID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerID="0521cd49bca7037f1b806186b9b5d16633c8ca28d994c0657a3e91d697c24158" exitCode=0 Jan 29 11:54:40 crc kubenswrapper[4593]: I0129 11:54:40.665084 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerDied","Data":"0521cd49bca7037f1b806186b9b5d16633c8ca28d994c0657a3e91d697c24158"} Jan 29 11:54:42 crc kubenswrapper[4593]: I0129 11:54:42.684780 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" 
event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerStarted","Data":"4b024635f7e0a2041ba01fa5476ffe66122cc8f456ae02dcda2d58882e40c464"} Jan 29 11:54:42 crc kubenswrapper[4593]: I0129 11:54:42.714318 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bddf7" podStartSLOduration=2.78948601 podStartE2EDuration="15.71426674s" podCreationTimestamp="2026-01-29 11:54:27 +0000 UTC" firstStartedPulling="2026-01-29 11:54:28.540334287 +0000 UTC m=+3334.413368478" lastFinishedPulling="2026-01-29 11:54:41.465115017 +0000 UTC m=+3347.338149208" observedRunningTime="2026-01-29 11:54:42.712472922 +0000 UTC m=+3348.585507113" watchObservedRunningTime="2026-01-29 11:54:42.71426674 +0000 UTC m=+3348.587300941" Jan 29 11:54:46 crc kubenswrapper[4593]: I0129 11:54:46.075356 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:54:46 crc kubenswrapper[4593]: E0129 11:54:46.075941 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:54:47 crc kubenswrapper[4593]: I0129 11:54:47.640255 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:47 crc kubenswrapper[4593]: I0129 11:54:47.640598 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:54:48 crc kubenswrapper[4593]: I0129 11:54:48.705389 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bddf7" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" probeResult="failure" output=< Jan 29 11:54:48 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:54:48 crc kubenswrapper[4593]: > Jan 29 11:54:58 crc kubenswrapper[4593]: I0129 11:54:58.695256 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bddf7" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" probeResult="failure" output=< Jan 29 11:54:58 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:54:58 crc kubenswrapper[4593]: > Jan 29 11:55:01 crc kubenswrapper[4593]: I0129 11:55:01.074774 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:55:01 crc kubenswrapper[4593]: E0129 11:55:01.075374 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:55:08 crc kubenswrapper[4593]: I0129 11:55:08.692431 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bddf7" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" 
containerName="registry-server" probeResult="failure" output=< Jan 29 11:55:08 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 11:55:08 crc kubenswrapper[4593]: > Jan 29 11:55:13 crc kubenswrapper[4593]: I0129 11:55:13.076678 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:55:13 crc kubenswrapper[4593]: E0129 11:55:13.077568 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:55:17 crc kubenswrapper[4593]: I0129 11:55:17.692667 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:55:17 crc kubenswrapper[4593]: I0129 11:55:17.749684 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:55:17 crc kubenswrapper[4593]: I0129 11:55:17.946107 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:55:19 crc kubenswrapper[4593]: I0129 11:55:18.999717 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bddf7" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" containerID="cri-o://4b024635f7e0a2041ba01fa5476ffe66122cc8f456ae02dcda2d58882e40c464" gracePeriod=2 Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.017471 4593 generic.go:334] "Generic (PLEG): container finished" podID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerID="4b024635f7e0a2041ba01fa5476ffe66122cc8f456ae02dcda2d58882e40c464" exitCode=0 Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.017763 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerDied","Data":"4b024635f7e0a2041ba01fa5476ffe66122cc8f456ae02dcda2d58882e40c464"} Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.334288 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.491104 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities\") pod \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.491221 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content\") pod \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.491269 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9qsk\" (UniqueName: \"kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk\") pod \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\" (UID: \"e3ea983b-a914-4260-9fe2-8fa75d2f1e08\") " Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.492374 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities" (OuterVolumeSpecName: "utilities") pod "e3ea983b-a914-4260-9fe2-8fa75d2f1e08" (UID: "e3ea983b-a914-4260-9fe2-8fa75d2f1e08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.531947 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk" (OuterVolumeSpecName: "kube-api-access-c9qsk") pod "e3ea983b-a914-4260-9fe2-8fa75d2f1e08" (UID: "e3ea983b-a914-4260-9fe2-8fa75d2f1e08"). InnerVolumeSpecName "kube-api-access-c9qsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.594530 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.594566 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9qsk\" (UniqueName: \"kubernetes.io/projected/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-kube-api-access-c9qsk\") on node \"crc\" DevicePath \"\"" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.635581 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3ea983b-a914-4260-9fe2-8fa75d2f1e08" (UID: "e3ea983b-a914-4260-9fe2-8fa75d2f1e08"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 11:55:20 crc kubenswrapper[4593]: I0129 11:55:20.699179 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ea983b-a914-4260-9fe2-8fa75d2f1e08-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.028833 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bddf7" event={"ID":"e3ea983b-a914-4260-9fe2-8fa75d2f1e08","Type":"ContainerDied","Data":"a8f692cc178e40d6dd2a183f0f930fe61616b4622888a4583e31fe0b88efede4"} Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.028896 4593 scope.go:117] "RemoveContainer" containerID="4b024635f7e0a2041ba01fa5476ffe66122cc8f456ae02dcda2d58882e40c464" Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.030091 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bddf7" Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.054251 4593 scope.go:117] "RemoveContainer" containerID="0521cd49bca7037f1b806186b9b5d16633c8ca28d994c0657a3e91d697c24158" Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.073779 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.103810 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bddf7"] Jan 29 11:55:21 crc kubenswrapper[4593]: I0129 11:55:21.119272 4593 scope.go:117] "RemoveContainer" containerID="bc929a23cf8d1038032aac760cbbd186410de536e009c9bb9f788f8fc8527d9a" Jan 29 11:55:23 crc kubenswrapper[4593]: I0129 11:55:23.086379 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" path="/var/lib/kubelet/pods/e3ea983b-a914-4260-9fe2-8fa75d2f1e08/volumes" Jan 29 11:55:25 crc kubenswrapper[4593]: I0129 11:55:25.082895 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:55:25 crc kubenswrapper[4593]: E0129 11:55:25.083479 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:55:36 crc kubenswrapper[4593]: I0129 11:55:36.075002 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:55:36 crc kubenswrapper[4593]: E0129 11:55:36.075929 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:55:50 crc kubenswrapper[4593]: I0129 11:55:50.075532 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:55:50 crc kubenswrapper[4593]: E0129 11:55:50.076459 
4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 11:56:05 crc kubenswrapper[4593]: I0129 11:56:05.082389 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 11:56:05 crc kubenswrapper[4593]: I0129 11:56:05.425606 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1"} Jan 29 11:58:33 crc kubenswrapper[4593]: I0129 11:58:33.947177 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:58:33 crc kubenswrapper[4593]: I0129 11:58:33.948009 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:59:03 crc kubenswrapper[4593]: I0129 11:59:03.946657 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:59:03 crc kubenswrapper[4593]: I0129 11:59:03.947346 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:59:33 crc kubenswrapper[4593]: I0129 11:59:33.946318 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 11:59:33 crc kubenswrapper[4593]: I0129 11:59:33.947131 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 11:59:33 crc kubenswrapper[4593]: I0129 11:59:33.947234 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 11:59:33 crc kubenswrapper[4593]: I0129 11:59:33.948159 4593 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 11:59:33 crc kubenswrapper[4593]: I0129 11:59:33.948235 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1" gracePeriod=600 Jan 29 11:59:34 crc kubenswrapper[4593]: I0129 11:59:34.652832 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1" exitCode=0 Jan 29 11:59:34 crc kubenswrapper[4593]: I0129 11:59:34.652900 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1"} Jan 29 11:59:34 crc kubenswrapper[4593]: I0129 11:59:34.653452 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0"} Jan 29 11:59:34 crc kubenswrapper[4593]: I0129 11:59:34.653520 4593 scope.go:117] "RemoveContainer" containerID="e6722d2d5154ddb14b4e2303d08080ea93d25791e68de64e92824bd70e0808f2" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.200906 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9"] Jan 29 12:00:00 crc kubenswrapper[4593]: E0129 12:00:00.202067 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="extract-utilities" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.202100 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="extract-utilities" Jan 29 12:00:00 crc kubenswrapper[4593]: E0129 12:00:00.202133 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.202142 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" Jan 29 12:00:00 crc kubenswrapper[4593]: E0129 12:00:00.202160 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="extract-content" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.202168 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="extract-content" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.202398 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ea983b-a914-4260-9fe2-8fa75d2f1e08" containerName="registry-server" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.203228 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.206228 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.206714 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.246733 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9"] Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.333147 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.333242 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.333286 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpl6\" (UniqueName: \"kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.434995 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.435061 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.435091 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhpl6\" (UniqueName: \"kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.436596 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume\") pod 
\"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.475786 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.482172 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhpl6\" (UniqueName: \"kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6\") pod \"collect-profiles-29494800-kdpv9\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:00 crc kubenswrapper[4593]: I0129 12:00:00.543346 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:01 crc kubenswrapper[4593]: I0129 12:00:01.220431 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9"] Jan 29 12:00:01 crc kubenswrapper[4593]: I0129 12:00:01.911083 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" event={"ID":"88bca612-672a-4f26-8d39-7fde2a190cca","Type":"ContainerStarted","Data":"c1728aeb51c3b8fb22eb3ef7139e5d2760bf904fa43fbe1defddfdb72c433cb4"} Jan 29 12:00:01 crc kubenswrapper[4593]: I0129 12:00:01.911127 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" event={"ID":"88bca612-672a-4f26-8d39-7fde2a190cca","Type":"ContainerStarted","Data":"b9b1d235b3bafaa96859a822c6375bf05a330d7acc37ead49553ec9eb4fafcd4"} Jan 29 12:00:01 crc kubenswrapper[4593]: I0129 12:00:01.935336 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" podStartSLOduration=1.935301767 podStartE2EDuration="1.935301767s" podCreationTimestamp="2026-01-29 12:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:00:01.927328811 +0000 UTC m=+3667.800363002" watchObservedRunningTime="2026-01-29 12:00:01.935301767 +0000 UTC m=+3667.808335958" Jan 29 12:00:02 crc kubenswrapper[4593]: I0129 12:00:02.921415 4593 generic.go:334] "Generic (PLEG): container finished" podID="88bca612-672a-4f26-8d39-7fde2a190cca" containerID="c1728aeb51c3b8fb22eb3ef7139e5d2760bf904fa43fbe1defddfdb72c433cb4" exitCode=0 Jan 29 12:00:02 crc kubenswrapper[4593]: I0129 12:00:02.921473 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" event={"ID":"88bca612-672a-4f26-8d39-7fde2a190cca","Type":"ContainerDied","Data":"c1728aeb51c3b8fb22eb3ef7139e5d2760bf904fa43fbe1defddfdb72c433cb4"} Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.527163 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.633332 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume\") pod \"88bca612-672a-4f26-8d39-7fde2a190cca\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.633479 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume\") pod \"88bca612-672a-4f26-8d39-7fde2a190cca\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.633515 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhpl6\" (UniqueName: \"kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6\") pod \"88bca612-672a-4f26-8d39-7fde2a190cca\" (UID: \"88bca612-672a-4f26-8d39-7fde2a190cca\") " Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.634378 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume" (OuterVolumeSpecName: "config-volume") pod "88bca612-672a-4f26-8d39-7fde2a190cca" (UID: "88bca612-672a-4f26-8d39-7fde2a190cca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.642251 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "88bca612-672a-4f26-8d39-7fde2a190cca" (UID: "88bca612-672a-4f26-8d39-7fde2a190cca"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.642946 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6" (OuterVolumeSpecName: "kube-api-access-nhpl6") pod "88bca612-672a-4f26-8d39-7fde2a190cca" (UID: "88bca612-672a-4f26-8d39-7fde2a190cca"). InnerVolumeSpecName "kube-api-access-nhpl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.736473 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88bca612-672a-4f26-8d39-7fde2a190cca-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.736874 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88bca612-672a-4f26-8d39-7fde2a190cca-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.736919 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhpl6\" (UniqueName: \"kubernetes.io/projected/88bca612-672a-4f26-8d39-7fde2a190cca-kube-api-access-nhpl6\") on node \"crc\" DevicePath \"\"" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.945367 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" event={"ID":"88bca612-672a-4f26-8d39-7fde2a190cca","Type":"ContainerDied","Data":"b9b1d235b3bafaa96859a822c6375bf05a330d7acc37ead49553ec9eb4fafcd4"} Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.945455 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9b1d235b3bafaa96859a822c6375bf05a330d7acc37ead49553ec9eb4fafcd4" Jan 29 12:00:04 crc kubenswrapper[4593]: I0129 12:00:04.945548 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-kdpv9" Jan 29 12:00:05 crc kubenswrapper[4593]: I0129 12:00:05.622250 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"] Jan 29 12:00:05 crc kubenswrapper[4593]: I0129 12:00:05.632459 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494755-htvh8"] Jan 29 12:00:07 crc kubenswrapper[4593]: I0129 12:00:07.087055 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d624d92-85b0-48dc-94f4-047ac84aaa0c" path="/var/lib/kubelet/pods/8d624d92-85b0-48dc-94f4-047ac84aaa0c/volumes" Jan 29 12:00:16 crc kubenswrapper[4593]: I0129 12:00:16.406622 4593 scope.go:117] "RemoveContainer" containerID="c821139e8b0317636f7e45a909cbff9ea156a76bb671f91a36836e985d04e36c" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.003767 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:00:52 crc kubenswrapper[4593]: E0129 12:00:52.004889 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88bca612-672a-4f26-8d39-7fde2a190cca" containerName="collect-profiles" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.004908 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="88bca612-672a-4f26-8d39-7fde2a190cca" containerName="collect-profiles" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.005167 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="88bca612-672a-4f26-8d39-7fde2a190cca" containerName="collect-profiles" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.006912 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.015119 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.151652 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd27r\" (UniqueName: \"kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.152354 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.152571 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.254606 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd27r\" (UniqueName: \"kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.254698 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.254778 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.255346 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.255418 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.277151 4593 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qd27r\" (UniqueName: \"kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r\") pod \"certified-operators-nmvmp\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.355775 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:00:52 crc kubenswrapper[4593]: I0129 12:00:52.940984 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:00:53 crc kubenswrapper[4593]: I0129 12:00:53.409509 4593 generic.go:334] "Generic (PLEG): container finished" podID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerID="ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae" exitCode=0 Jan 29 12:00:53 crc kubenswrapper[4593]: I0129 12:00:53.409835 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerDied","Data":"ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae"} Jan 29 12:00:53 crc kubenswrapper[4593]: I0129 12:00:53.410063 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerStarted","Data":"57a10fdf5b721a0b423550e25c12e2cc02e30dd94c94225a8018e4ccd80601d0"} Jan 29 12:00:53 crc kubenswrapper[4593]: I0129 12:00:53.416719 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:00:56 crc kubenswrapper[4593]: I0129 12:00:56.440973 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerStarted","Data":"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1"} Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.184689 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29494801-8jgxn"] Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.186236 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.197241 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494801-8jgxn"] Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.198202 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.198272 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.198397 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxj24\" (UniqueName: \"kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.198450 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.317976 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxj24\" (UniqueName: \"kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.318086 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.318249 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.318336 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.331989 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.333575 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.350814 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.353908 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxj24\" (UniqueName: \"kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24\") pod \"keystone-cron-29494801-8jgxn\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:00 crc kubenswrapper[4593]: I0129 12:01:00.506312 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:01 crc kubenswrapper[4593]: I0129 12:01:01.124799 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29494801-8jgxn"] Jan 29 12:01:01 crc kubenswrapper[4593]: I0129 12:01:01.555076 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494801-8jgxn" event={"ID":"f7d47080-9737-4b86-9e40-a6c6bf7f1709","Type":"ContainerStarted","Data":"d5dcebdff1872143a7baa5b2f3daf0b82ebdcad3fdc1e3124fd8cbb11c7b3339"} Jan 29 12:01:02 crc kubenswrapper[4593]: I0129 12:01:02.567182 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494801-8jgxn" event={"ID":"f7d47080-9737-4b86-9e40-a6c6bf7f1709","Type":"ContainerStarted","Data":"c4f23aad4e75d53e9867238c0a4577c6262c2408292cb4cc450a9a2b02c73f78"} Jan 29 12:01:02 crc kubenswrapper[4593]: I0129 12:01:02.572171 4593 generic.go:334] "Generic (PLEG): container finished" podID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerID="b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1" exitCode=0 Jan 29 12:01:02 crc kubenswrapper[4593]: I0129 12:01:02.572253 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerDied","Data":"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1"} Jan 29 12:01:02 crc kubenswrapper[4593]: I0129 12:01:02.589901 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29494801-8jgxn" podStartSLOduration=2.589876692 podStartE2EDuration="2.589876692s" podCreationTimestamp="2026-01-29 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:01:02.588286489 +0000 UTC m=+3728.461320680" watchObservedRunningTime="2026-01-29 12:01:02.589876692 +0000 UTC m=+3728.462910893" Jan 29 12:01:03 crc 
kubenswrapper[4593]: I0129 12:01:03.588394 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerStarted","Data":"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3"} Jan 29 12:01:03 crc kubenswrapper[4593]: I0129 12:01:03.625837 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nmvmp" podStartSLOduration=2.919507389 podStartE2EDuration="12.625812366s" podCreationTimestamp="2026-01-29 12:00:51 +0000 UTC" firstStartedPulling="2026-01-29 12:00:53.416302106 +0000 UTC m=+3719.289336297" lastFinishedPulling="2026-01-29 12:01:03.122607073 +0000 UTC m=+3728.995641274" observedRunningTime="2026-01-29 12:01:03.623963736 +0000 UTC m=+3729.496997957" watchObservedRunningTime="2026-01-29 12:01:03.625812366 +0000 UTC m=+3729.498846557" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.587916 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.590540 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.612074 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.665499 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.665894 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8pmv\" (UniqueName: \"kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.666096 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.768081 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.768651 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8pmv\" (UniqueName: \"kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc 
kubenswrapper[4593]: I0129 12:01:04.768740 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.768738 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.769214 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.808947 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8pmv\" (UniqueName: \"kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv\") pod \"community-operators-69vh6\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:04 crc kubenswrapper[4593]: I0129 12:01:04.916132 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:05 crc kubenswrapper[4593]: I0129 12:01:05.612887 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:06 crc kubenswrapper[4593]: I0129 12:01:06.628568 4593 generic.go:334] "Generic (PLEG): container finished" podID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerID="dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b" exitCode=0 Jan 29 12:01:06 crc kubenswrapper[4593]: I0129 12:01:06.628687 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerDied","Data":"dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b"} Jan 29 12:01:06 crc kubenswrapper[4593]: I0129 12:01:06.629108 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerStarted","Data":"21eb6256f05b21a81f3c529ef18a59cfba08db30ea0577b58f3b450ba62f0f3f"} Jan 29 12:01:08 crc kubenswrapper[4593]: I0129 12:01:08.649554 4593 generic.go:334] "Generic (PLEG): container finished" podID="f7d47080-9737-4b86-9e40-a6c6bf7f1709" containerID="c4f23aad4e75d53e9867238c0a4577c6262c2408292cb4cc450a9a2b02c73f78" exitCode=0 Jan 29 12:01:08 crc kubenswrapper[4593]: I0129 12:01:08.649671 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494801-8jgxn" event={"ID":"f7d47080-9737-4b86-9e40-a6c6bf7f1709","Type":"ContainerDied","Data":"c4f23aad4e75d53e9867238c0a4577c6262c2408292cb4cc450a9a2b02c73f78"} Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.204824 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.389240 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxj24\" (UniqueName: \"kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24\") pod \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.389312 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle\") pod \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.389457 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data\") pod \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.389544 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys\") pod \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\" (UID: \"f7d47080-9737-4b86-9e40-a6c6bf7f1709\") " Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.675985 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29494801-8jgxn" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.677172 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29494801-8jgxn" event={"ID":"f7d47080-9737-4b86-9e40-a6c6bf7f1709","Type":"ContainerDied","Data":"d5dcebdff1872143a7baa5b2f3daf0b82ebdcad3fdc1e3124fd8cbb11c7b3339"} Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.677246 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5dcebdff1872143a7baa5b2f3daf0b82ebdcad3fdc1e3124fd8cbb11c7b3339" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.867594 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24" (OuterVolumeSpecName: "kube-api-access-cxj24") pod "f7d47080-9737-4b86-9e40-a6c6bf7f1709" (UID: "f7d47080-9737-4b86-9e40-a6c6bf7f1709"). InnerVolumeSpecName "kube-api-access-cxj24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.876544 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f7d47080-9737-4b86-9e40-a6c6bf7f1709" (UID: "f7d47080-9737-4b86-9e40-a6c6bf7f1709"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.900388 4593 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.900422 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxj24\" (UniqueName: \"kubernetes.io/projected/f7d47080-9737-4b86-9e40-a6c6bf7f1709-kube-api-access-cxj24\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.958785 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7d47080-9737-4b86-9e40-a6c6bf7f1709" (UID: "f7d47080-9737-4b86-9e40-a6c6bf7f1709"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:01:10 crc kubenswrapper[4593]: I0129 12:01:10.959045 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data" (OuterVolumeSpecName: "config-data") pod "f7d47080-9737-4b86-9e40-a6c6bf7f1709" (UID: "f7d47080-9737-4b86-9e40-a6c6bf7f1709"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:01:11 crc kubenswrapper[4593]: I0129 12:01:11.002566 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:11 crc kubenswrapper[4593]: I0129 12:01:11.002603 4593 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7d47080-9737-4b86-9e40-a6c6bf7f1709-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:11 crc kubenswrapper[4593]: I0129 12:01:11.687294 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerStarted","Data":"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e"} Jan 29 12:01:12 crc kubenswrapper[4593]: I0129 12:01:12.356726 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:12 crc kubenswrapper[4593]: I0129 12:01:12.356780 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:12 crc kubenswrapper[4593]: I0129 12:01:12.411158 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:13 crc kubenswrapper[4593]: I0129 12:01:13.010572 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:13 crc kubenswrapper[4593]: I0129 12:01:13.071510 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:01:13 crc kubenswrapper[4593]: I0129 12:01:13.723352 4593 generic.go:334] "Generic (PLEG): container finished" podID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerID="e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e" exitCode=0 Jan 29 12:01:13 crc 
kubenswrapper[4593]: I0129 12:01:13.723380 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerDied","Data":"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e"} Jan 29 12:01:14 crc kubenswrapper[4593]: I0129 12:01:14.735517 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerStarted","Data":"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f"} Jan 29 12:01:14 crc kubenswrapper[4593]: I0129 12:01:14.735682 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nmvmp" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="registry-server" containerID="cri-o://f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3" gracePeriod=2 Jan 29 12:01:14 crc kubenswrapper[4593]: I0129 12:01:14.917003 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:14 crc kubenswrapper[4593]: I0129 12:01:14.917067 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.447353 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.470025 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content\") pod \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.470119 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities\") pod \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.470179 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd27r\" (UniqueName: \"kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r\") pod \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\" (UID: \"fd4958b5-6b8b-4701-854c-5fffd4db0e4c\") " Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.471072 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities" (OuterVolumeSpecName: "utilities") pod "fd4958b5-6b8b-4701-854c-5fffd4db0e4c" (UID: "fd4958b5-6b8b-4701-854c-5fffd4db0e4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.476892 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r" (OuterVolumeSpecName: "kube-api-access-qd27r") pod "fd4958b5-6b8b-4701-854c-5fffd4db0e4c" (UID: "fd4958b5-6b8b-4701-854c-5fffd4db0e4c"). InnerVolumeSpecName "kube-api-access-qd27r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.484913 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-69vh6" podStartSLOduration=3.9887837360000002 podStartE2EDuration="11.484888011s" podCreationTimestamp="2026-01-29 12:01:04 +0000 UTC" firstStartedPulling="2026-01-29 12:01:06.63067129 +0000 UTC m=+3732.503705481" lastFinishedPulling="2026-01-29 12:01:14.126775565 +0000 UTC m=+3739.999809756" observedRunningTime="2026-01-29 12:01:14.771817146 +0000 UTC m=+3740.644851337" watchObservedRunningTime="2026-01-29 12:01:15.484888011 +0000 UTC m=+3741.357922222" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.532318 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd4958b5-6b8b-4701-854c-5fffd4db0e4c" (UID: "fd4958b5-6b8b-4701-854c-5fffd4db0e4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.574019 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd27r\" (UniqueName: \"kubernetes.io/projected/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-kube-api-access-qd27r\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.574060 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.574071 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd4958b5-6b8b-4701-854c-5fffd4db0e4c-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.760484 4593 generic.go:334] "Generic (PLEG): container finished" podID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerID="f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3" exitCode=0 Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.771003 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerDied","Data":"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3"} Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.771233 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmvmp" event={"ID":"fd4958b5-6b8b-4701-854c-5fffd4db0e4c","Type":"ContainerDied","Data":"57a10fdf5b721a0b423550e25c12e2cc02e30dd94c94225a8018e4ccd80601d0"} Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.771300 4593 scope.go:117] "RemoveContainer" containerID="f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.892063 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmvmp" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.951290 4593 scope.go:117] "RemoveContainer" containerID="b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1" Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.980248 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-69vh6" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="registry-server" probeResult="failure" output=< Jan 29 12:01:15 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:01:15 crc kubenswrapper[4593]: > Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.987708 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:01:15 crc kubenswrapper[4593]: I0129 12:01:15.995863 4593 scope.go:117] "RemoveContainer" containerID="ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.008518 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nmvmp"] Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.043760 4593 scope.go:117] "RemoveContainer" containerID="f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3" Jan 29 12:01:16 crc kubenswrapper[4593]: E0129 12:01:16.044552 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3\": container with ID starting with f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3 not found: ID does not exist" containerID="f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.044600 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3"} err="failed to get container status \"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3\": rpc error: code = NotFound desc = could not find container \"f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3\": container with ID starting with f6c9ab006bfb5d3794f55ceff75f7522c034bb42f3cc7f70c3559e9b852871f3 not found: ID does not exist" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.044642 4593 scope.go:117] "RemoveContainer" containerID="b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1" Jan 29 12:01:16 crc kubenswrapper[4593]: E0129 12:01:16.047077 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1\": container with ID starting with b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1 not found: ID does not exist" containerID="b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.047151 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1"} err="failed to get container status \"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1\": rpc error: code = NotFound desc = could not find container \"b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1\": 
container with ID starting with b5bb6fcab278c8884cb954e49a60be00084a42dcef19ad25b4a6ea7d8710ceb1 not found: ID does not exist" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.047187 4593 scope.go:117] "RemoveContainer" containerID="ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae" Jan 29 12:01:16 crc kubenswrapper[4593]: E0129 12:01:16.050836 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae\": container with ID starting with ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae not found: ID does not exist" containerID="ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae" Jan 29 12:01:16 crc kubenswrapper[4593]: I0129 12:01:16.051076 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae"} err="failed to get container status \"ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae\": rpc error: code = NotFound desc = could not find container \"ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae\": container with ID starting with ebc6457b2420fee9f914c3931d6ac7886197195125ce55b8488658480f1e8fae not found: ID does not exist" Jan 29 12:01:17 crc kubenswrapper[4593]: I0129 12:01:17.090195 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" path="/var/lib/kubelet/pods/fd4958b5-6b8b-4701-854c-5fffd4db0e4c/volumes" Jan 29 12:01:24 crc kubenswrapper[4593]: I0129 12:01:24.969477 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:25 crc kubenswrapper[4593]: I0129 12:01:25.028246 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:25 crc kubenswrapper[4593]: I0129 12:01:25.208250 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:26 crc kubenswrapper[4593]: I0129 12:01:26.901410 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-69vh6" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="registry-server" containerID="cri-o://bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f" gracePeriod=2 Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.484824 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.651114 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities\") pod \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.651212 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content\") pod \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.651462 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8pmv\" (UniqueName: \"kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv\") pod \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\" (UID: \"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a\") " Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.652648 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities" (OuterVolumeSpecName: "utilities") pod "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" (UID: "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.673654 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.692850 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv" (OuterVolumeSpecName: "kube-api-access-z8pmv") pod "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" (UID: "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a"). InnerVolumeSpecName "kube-api-access-z8pmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.746345 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" (UID: "1c76ee6e-190d-4dcf-9aa4-62557c0ee07a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.746896 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747341 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="extract-utilities" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747359 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="extract-utilities" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747380 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="extract-content" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747390 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="extract-content" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747404 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d47080-9737-4b86-9e40-a6c6bf7f1709" containerName="keystone-cron" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747409 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d47080-9737-4b86-9e40-a6c6bf7f1709" containerName="keystone-cron" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747417 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="extract-content" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747423 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="extract-content" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747435 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747441 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747463 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747470 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: E0129 12:01:27.747482 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="extract-utilities" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747488 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="extract-utilities" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747700 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4958b5-6b8b-4701-854c-5fffd4db0e4c" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747717 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerName="registry-server" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.747729 4593 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f7d47080-9737-4b86-9e40-a6c6bf7f1709" containerName="keystone-cron" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.749779 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.758969 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.777321 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.777435 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.777515 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb5b6\" (UniqueName: \"kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.777658 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8pmv\" (UniqueName: \"kubernetes.io/projected/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-kube-api-access-z8pmv\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.777682 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.879357 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.879443 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.879487 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb5b6\" (UniqueName: \"kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.879862 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.880165 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.898426 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb5b6\" (UniqueName: \"kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6\") pod \"redhat-marketplace-smnz5\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.912260 4593 generic.go:334] "Generic (PLEG): container finished" podID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" containerID="bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f" exitCode=0 Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.912315 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerDied","Data":"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f"} Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.912337 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-69vh6" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.912355 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-69vh6" event={"ID":"1c76ee6e-190d-4dcf-9aa4-62557c0ee07a","Type":"ContainerDied","Data":"21eb6256f05b21a81f3c529ef18a59cfba08db30ea0577b58f3b450ba62f0f3f"} Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.912380 4593 scope.go:117] "RemoveContainer" containerID="bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.965537 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.967301 4593 scope.go:117] "RemoveContainer" containerID="e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e" Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.983067 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-69vh6"] Jan 29 12:01:27 crc kubenswrapper[4593]: I0129 12:01:27.989712 4593 scope.go:117] "RemoveContainer" containerID="dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.010487 4593 scope.go:117] "RemoveContainer" containerID="bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f" Jan 29 12:01:28 crc kubenswrapper[4593]: E0129 12:01:28.011115 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f\": container with ID starting with bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f not found: ID 
does not exist" containerID="bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.011349 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f"} err="failed to get container status \"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f\": rpc error: code = NotFound desc = could not find container \"bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f\": container with ID starting with bf5ccb85076ad56cb114d65b43dfb0cde9c800efbd2209ba3f59888cc0edaa6f not found: ID does not exist" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.011375 4593 scope.go:117] "RemoveContainer" containerID="e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e" Jan 29 12:01:28 crc kubenswrapper[4593]: E0129 12:01:28.011893 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e\": container with ID starting with e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e not found: ID does not exist" containerID="e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.011916 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e"} err="failed to get container status \"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e\": rpc error: code = NotFound desc = could not find container \"e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e\": container with ID starting with e64072a85fcee74e04835683a420cbde4a1984942a0cfff52032e3eb93b67c5e not found: ID does not exist" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.011933 4593 scope.go:117] "RemoveContainer" containerID="dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b" Jan 29 12:01:28 crc kubenswrapper[4593]: E0129 12:01:28.012184 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b\": container with ID starting with dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b not found: ID does not exist" containerID="dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.012216 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b"} err="failed to get container status \"dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b\": rpc error: code = NotFound desc = could not find container \"dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b\": container with ID starting with dbd38eb8e7e4acf4e95c3c0522d3597248765922ad27202f1c27e877d32b2c0b not found: ID does not exist" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.085364 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.613523 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.927324 4593 generic.go:334] "Generic (PLEG): container finished" podID="179a9993-2883-4f19-9c6e-694735342028" containerID="2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590" exitCode=0 Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.927399 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerDied","Data":"2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590"} Jan 29 12:01:28 crc kubenswrapper[4593]: I0129 12:01:28.927429 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerStarted","Data":"d4efa2dcc8fad3c1791de98ad732751b7ce7b129092b4c0370f8969d147c47ee"} Jan 29 12:01:29 crc kubenswrapper[4593]: I0129 12:01:29.087185 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c76ee6e-190d-4dcf-9aa4-62557c0ee07a" path="/var/lib/kubelet/pods/1c76ee6e-190d-4dcf-9aa4-62557c0ee07a/volumes" Jan 29 12:01:30 crc kubenswrapper[4593]: I0129 12:01:30.955420 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerStarted","Data":"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44"} Jan 29 12:01:31 crc kubenswrapper[4593]: I0129 12:01:31.968140 4593 generic.go:334] "Generic (PLEG): container finished" podID="179a9993-2883-4f19-9c6e-694735342028" containerID="e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44" exitCode=0 Jan 29 12:01:31 crc kubenswrapper[4593]: I0129 12:01:31.968220 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerDied","Data":"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44"} Jan 29 12:01:32 crc kubenswrapper[4593]: I0129 12:01:32.982536 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerStarted","Data":"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625"} Jan 29 12:01:33 crc kubenswrapper[4593]: I0129 12:01:33.011695 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-smnz5" podStartSLOduration=2.443026248 podStartE2EDuration="6.011649474s" podCreationTimestamp="2026-01-29 12:01:27 +0000 UTC" firstStartedPulling="2026-01-29 12:01:28.930648837 +0000 UTC m=+3754.803683028" lastFinishedPulling="2026-01-29 12:01:32.499272063 +0000 UTC m=+3758.372306254" observedRunningTime="2026-01-29 12:01:33.0007874 +0000 UTC m=+3758.873821591" watchObservedRunningTime="2026-01-29 12:01:33.011649474 +0000 UTC m=+3758.884683675" Jan 29 12:01:38 crc kubenswrapper[4593]: I0129 12:01:38.086532 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:38 crc kubenswrapper[4593]: I0129 12:01:38.087395 4593 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:38 crc kubenswrapper[4593]: I0129 12:01:38.251038 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:39 crc kubenswrapper[4593]: I0129 12:01:39.090553 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:39 crc kubenswrapper[4593]: I0129 12:01:39.144304 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.057813 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-smnz5" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="registry-server" containerID="cri-o://c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625" gracePeriod=2 Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.751278 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.858021 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content\") pod \"179a9993-2883-4f19-9c6e-694735342028\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.858275 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb5b6\" (UniqueName: \"kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6\") pod \"179a9993-2883-4f19-9c6e-694735342028\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.858482 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities\") pod \"179a9993-2883-4f19-9c6e-694735342028\" (UID: \"179a9993-2883-4f19-9c6e-694735342028\") " Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.859868 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities" (OuterVolumeSpecName: "utilities") pod "179a9993-2883-4f19-9c6e-694735342028" (UID: "179a9993-2883-4f19-9c6e-694735342028"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.887087 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "179a9993-2883-4f19-9c6e-694735342028" (UID: "179a9993-2883-4f19-9c6e-694735342028"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.887475 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6" (OuterVolumeSpecName: "kube-api-access-jb5b6") pod "179a9993-2883-4f19-9c6e-694735342028" (UID: "179a9993-2883-4f19-9c6e-694735342028"). InnerVolumeSpecName "kube-api-access-jb5b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.960754 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.961070 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/179a9993-2883-4f19-9c6e-694735342028-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:41 crc kubenswrapper[4593]: I0129 12:01:41.961082 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb5b6\" (UniqueName: \"kubernetes.io/projected/179a9993-2883-4f19-9c6e-694735342028-kube-api-access-jb5b6\") on node \"crc\" DevicePath \"\"" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.069434 4593 generic.go:334] "Generic (PLEG): container finished" podID="179a9993-2883-4f19-9c6e-694735342028" containerID="c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625" exitCode=0 Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.069481 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smnz5" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.069486 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerDied","Data":"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625"} Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.069516 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smnz5" event={"ID":"179a9993-2883-4f19-9c6e-694735342028","Type":"ContainerDied","Data":"d4efa2dcc8fad3c1791de98ad732751b7ce7b129092b4c0370f8969d147c47ee"} Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.069553 4593 scope.go:117] "RemoveContainer" containerID="c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.093306 4593 scope.go:117] "RemoveContainer" containerID="e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.104780 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.124598 4593 scope.go:117] "RemoveContainer" containerID="2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.145990 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-smnz5"] Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.176835 4593 scope.go:117] "RemoveContainer" containerID="c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625" Jan 29 12:01:42 crc kubenswrapper[4593]: E0129 12:01:42.177449 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625\": container with ID starting with c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625 not found: ID does not exist" containerID="c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.177504 4593 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625"} err="failed to get container status \"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625\": rpc error: code = NotFound desc = could not find container \"c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625\": container with ID starting with c78fa890dc0c14220e50f383ce30e2165a48033e23983baf5819b835a8e6d625 not found: ID does not exist" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.177532 4593 scope.go:117] "RemoveContainer" containerID="e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44" Jan 29 12:01:42 crc kubenswrapper[4593]: E0129 12:01:42.178010 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44\": container with ID starting with e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44 not found: ID does not exist" containerID="e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.178043 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44"} err="failed to get container status \"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44\": rpc error: code = NotFound desc = could not find container \"e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44\": container with ID starting with e7310fbd04ccf37ffbccc22a735af572920edf03a9c7dcdb62c68debd9dcdd44 not found: ID does not exist" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.178058 4593 scope.go:117] "RemoveContainer" containerID="2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590" Jan 29 12:01:42 crc kubenswrapper[4593]: E0129 12:01:42.179540 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590\": container with ID starting with 2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590 not found: ID does not exist" containerID="2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590" Jan 29 12:01:42 crc kubenswrapper[4593]: I0129 12:01:42.179580 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590"} err="failed to get container status \"2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590\": rpc error: code = NotFound desc = could not find container \"2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590\": container with ID starting with 2f10096e94c62eedb1dbeceb665bc8f19c2daf35edddea65440c5db8583f1590 not found: ID does not exist" Jan 29 12:01:43 crc kubenswrapper[4593]: I0129 12:01:43.087966 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="179a9993-2883-4f19-9c6e-694735342028" path="/var/lib/kubelet/pods/179a9993-2883-4f19-9c6e-694735342028/volumes" Jan 29 12:02:03 crc kubenswrapper[4593]: I0129 12:02:03.946273 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:02:03 crc kubenswrapper[4593]: I0129 12:02:03.946848 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:02:33 crc kubenswrapper[4593]: I0129 12:02:33.946733 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:02:33 crc kubenswrapper[4593]: I0129 12:02:33.947235 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:03:03 crc kubenswrapper[4593]: I0129 12:03:03.946757 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:03:03 crc kubenswrapper[4593]: I0129 12:03:03.947332 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:03:03 crc kubenswrapper[4593]: I0129 12:03:03.947390 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:03:03 crc kubenswrapper[4593]: I0129 12:03:03.948265 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:03:03 crc kubenswrapper[4593]: I0129 12:03:03.948332 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" gracePeriod=600 Jan 29 12:03:04 crc kubenswrapper[4593]: E0129 12:03:04.078038 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:03:04 crc kubenswrapper[4593]: I0129 12:03:04.943438 4593 
generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" exitCode=0 Jan 29 12:03:04 crc kubenswrapper[4593]: I0129 12:03:04.943493 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0"} Jan 29 12:03:04 crc kubenswrapper[4593]: I0129 12:03:04.943556 4593 scope.go:117] "RemoveContainer" containerID="e0a8bd46a646bdb78b7f5e35dccce37cceaacf8fb67f1dfa0ed9e182128af8b1" Jan 29 12:03:04 crc kubenswrapper[4593]: I0129 12:03:04.944418 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:03:04 crc kubenswrapper[4593]: E0129 12:03:04.944839 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:03:20 crc kubenswrapper[4593]: I0129 12:03:20.075190 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:03:20 crc kubenswrapper[4593]: E0129 12:03:20.076064 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:03:34 crc kubenswrapper[4593]: I0129 12:03:34.075486 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:03:34 crc kubenswrapper[4593]: E0129 12:03:34.076426 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:03:46 crc kubenswrapper[4593]: I0129 12:03:46.076141 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:03:46 crc kubenswrapper[4593]: E0129 12:03:46.078131 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:00 crc kubenswrapper[4593]: I0129 12:04:00.075309 4593 scope.go:117] "RemoveContainer" 
containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:00 crc kubenswrapper[4593]: E0129 12:04:00.077137 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:13 crc kubenswrapper[4593]: I0129 12:04:13.074977 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:13 crc kubenswrapper[4593]: E0129 12:04:13.078215 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:25 crc kubenswrapper[4593]: I0129 12:04:25.081964 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:25 crc kubenswrapper[4593]: E0129 12:04:25.082759 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:36 crc kubenswrapper[4593]: I0129 12:04:36.074956 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:36 crc kubenswrapper[4593]: E0129 12:04:36.075752 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:47 crc kubenswrapper[4593]: I0129 12:04:47.075188 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:47 crc kubenswrapper[4593]: E0129 12:04:47.075968 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:04:59 crc kubenswrapper[4593]: I0129 12:04:59.076837 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:04:59 crc kubenswrapper[4593]: E0129 12:04:59.077598 4593 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.244591 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:05:06 crc kubenswrapper[4593]: E0129 12:05:06.245615 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="registry-server" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.245664 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="registry-server" Jan 29 12:05:06 crc kubenswrapper[4593]: E0129 12:05:06.245687 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="extract-utilities" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.245694 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="extract-utilities" Jan 29 12:05:06 crc kubenswrapper[4593]: E0129 12:05:06.245709 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="extract-content" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.245716 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="extract-content" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.245960 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="179a9993-2883-4f19-9c6e-694735342028" containerName="registry-server" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.247395 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.278599 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.305868 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.306027 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.306122 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shcx5\" (UniqueName: \"kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.408223 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.408589 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.408667 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shcx5\" (UniqueName: \"kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.408836 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.409326 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.437078 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-shcx5\" (UniqueName: \"kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5\") pod \"redhat-operators-k7vkk\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:06 crc kubenswrapper[4593]: I0129 12:05:06.582620 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:07 crc kubenswrapper[4593]: I0129 12:05:07.310177 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:05:08 crc kubenswrapper[4593]: I0129 12:05:08.044248 4593 generic.go:334] "Generic (PLEG): container finished" podID="67146159-618b-4376-89e9-4c4433776a79" containerID="8d22093bb0433d57ba4af0c4dc12d757c6b02132977c80845c4c07f793d8a283" exitCode=0 Jan 29 12:05:08 crc kubenswrapper[4593]: I0129 12:05:08.044333 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerDied","Data":"8d22093bb0433d57ba4af0c4dc12d757c6b02132977c80845c4c07f793d8a283"} Jan 29 12:05:08 crc kubenswrapper[4593]: I0129 12:05:08.045169 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerStarted","Data":"423b79897654c7bfeba89f8b2ffde23e4d2402031fa3c58273297441a72736dd"} Jan 29 12:05:09 crc kubenswrapper[4593]: I0129 12:05:09.060843 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerStarted","Data":"903179b1a5123d41188d675ef19e4b23549a769ed206e5aeb71733e3c6d173cd"} Jan 29 12:05:14 crc kubenswrapper[4593]: I0129 12:05:14.075244 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:05:14 crc kubenswrapper[4593]: E0129 12:05:14.076272 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:05:17 crc kubenswrapper[4593]: I0129 12:05:17.271341 4593 generic.go:334] "Generic (PLEG): container finished" podID="67146159-618b-4376-89e9-4c4433776a79" containerID="903179b1a5123d41188d675ef19e4b23549a769ed206e5aeb71733e3c6d173cd" exitCode=0 Jan 29 12:05:17 crc kubenswrapper[4593]: I0129 12:05:17.271672 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerDied","Data":"903179b1a5123d41188d675ef19e4b23549a769ed206e5aeb71733e3c6d173cd"} Jan 29 12:05:18 crc kubenswrapper[4593]: I0129 12:05:18.281316 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerStarted","Data":"fb97502924f771b8811dfeb8fae54dde5dac5f5d5a4c09423646accb2a0f8e52"} Jan 29 12:05:26 crc kubenswrapper[4593]: I0129 12:05:26.583732 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:26 crc kubenswrapper[4593]: I0129 12:05:26.584314 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:27 crc kubenswrapper[4593]: I0129 12:05:27.632567 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k7vkk" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" probeResult="failure" output=< Jan 29 12:05:27 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:05:27 crc kubenswrapper[4593]: > Jan 29 12:05:29 crc kubenswrapper[4593]: I0129 12:05:29.074993 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:05:29 crc kubenswrapper[4593]: E0129 12:05:29.075540 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:05:37 crc kubenswrapper[4593]: I0129 12:05:37.634156 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k7vkk" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" probeResult="failure" output=< Jan 29 12:05:37 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:05:37 crc kubenswrapper[4593]: > Jan 29 12:05:44 crc kubenswrapper[4593]: I0129 12:05:44.075784 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:05:44 crc kubenswrapper[4593]: E0129 12:05:44.076673 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:05:47 crc kubenswrapper[4593]: I0129 12:05:47.630193 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k7vkk" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" probeResult="failure" output=< Jan 29 12:05:47 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:05:47 crc kubenswrapper[4593]: > Jan 29 12:05:56 crc kubenswrapper[4593]: I0129 12:05:56.647210 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:56 crc kubenswrapper[4593]: I0129 12:05:56.672443 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k7vkk" podStartSLOduration=41.003838628 podStartE2EDuration="50.672405524s" podCreationTimestamp="2026-01-29 12:05:06 +0000 UTC" firstStartedPulling="2026-01-29 12:05:08.048100037 +0000 UTC m=+3973.921134228" lastFinishedPulling="2026-01-29 12:05:17.716666933 +0000 UTC m=+3983.589701124" observedRunningTime="2026-01-29 12:05:18.304620909 +0000 UTC 
m=+3984.177655110" watchObservedRunningTime="2026-01-29 12:05:56.672405524 +0000 UTC m=+4022.545439715" Jan 29 12:05:56 crc kubenswrapper[4593]: I0129 12:05:56.698913 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:56 crc kubenswrapper[4593]: I0129 12:05:56.897384 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:05:57 crc kubenswrapper[4593]: I0129 12:05:57.074890 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:05:57 crc kubenswrapper[4593]: E0129 12:05:57.075218 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:05:58 crc kubenswrapper[4593]: I0129 12:05:58.083440 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k7vkk" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" containerID="cri-o://fb97502924f771b8811dfeb8fae54dde5dac5f5d5a4c09423646accb2a0f8e52" gracePeriod=2 Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.117840 4593 generic.go:334] "Generic (PLEG): container finished" podID="67146159-618b-4376-89e9-4c4433776a79" containerID="fb97502924f771b8811dfeb8fae54dde5dac5f5d5a4c09423646accb2a0f8e52" exitCode=0 Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.118064 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerDied","Data":"fb97502924f771b8811dfeb8fae54dde5dac5f5d5a4c09423646accb2a0f8e52"} Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.199607 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.322913 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities\") pod \"67146159-618b-4376-89e9-4c4433776a79\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.323085 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content\") pod \"67146159-618b-4376-89e9-4c4433776a79\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.323299 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shcx5\" (UniqueName: \"kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5\") pod \"67146159-618b-4376-89e9-4c4433776a79\" (UID: \"67146159-618b-4376-89e9-4c4433776a79\") " Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.324924 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities" (OuterVolumeSpecName: "utilities") pod "67146159-618b-4376-89e9-4c4433776a79" (UID: "67146159-618b-4376-89e9-4c4433776a79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.332024 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5" (OuterVolumeSpecName: "kube-api-access-shcx5") pod "67146159-618b-4376-89e9-4c4433776a79" (UID: "67146159-618b-4376-89e9-4c4433776a79"). InnerVolumeSpecName "kube-api-access-shcx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.426324 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shcx5\" (UniqueName: \"kubernetes.io/projected/67146159-618b-4376-89e9-4c4433776a79-kube-api-access-shcx5\") on node \"crc\" DevicePath \"\"" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.426365 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.451947 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67146159-618b-4376-89e9-4c4433776a79" (UID: "67146159-618b-4376-89e9-4c4433776a79"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:05:59 crc kubenswrapper[4593]: I0129 12:05:59.528495 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67146159-618b-4376-89e9-4c4433776a79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.132735 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vkk" event={"ID":"67146159-618b-4376-89e9-4c4433776a79","Type":"ContainerDied","Data":"423b79897654c7bfeba89f8b2ffde23e4d2402031fa3c58273297441a72736dd"} Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.132887 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vkk" Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.133092 4593 scope.go:117] "RemoveContainer" containerID="fb97502924f771b8811dfeb8fae54dde5dac5f5d5a4c09423646accb2a0f8e52" Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.161347 4593 scope.go:117] "RemoveContainer" containerID="903179b1a5123d41188d675ef19e4b23549a769ed206e5aeb71733e3c6d173cd" Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.171772 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.186325 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k7vkk"] Jan 29 12:06:00 crc kubenswrapper[4593]: I0129 12:06:00.188455 4593 scope.go:117] "RemoveContainer" containerID="8d22093bb0433d57ba4af0c4dc12d757c6b02132977c80845c4c07f793d8a283" Jan 29 12:06:01 crc kubenswrapper[4593]: I0129 12:06:01.090723 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67146159-618b-4376-89e9-4c4433776a79" path="/var/lib/kubelet/pods/67146159-618b-4376-89e9-4c4433776a79/volumes" Jan 29 12:06:12 crc kubenswrapper[4593]: I0129 12:06:12.075010 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:06:12 crc kubenswrapper[4593]: E0129 12:06:12.075962 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:06:27 crc kubenswrapper[4593]: I0129 12:06:27.075369 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:06:27 crc kubenswrapper[4593]: E0129 12:06:27.076733 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:06:39 crc kubenswrapper[4593]: I0129 12:06:39.075407 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:06:39 crc kubenswrapper[4593]: E0129 12:06:39.076227 
4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:06:51 crc kubenswrapper[4593]: I0129 12:06:51.091523 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:06:51 crc kubenswrapper[4593]: E0129 12:06:51.092913 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:07:05 crc kubenswrapper[4593]: I0129 12:07:05.110675 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:07:05 crc kubenswrapper[4593]: E0129 12:07:05.112053 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:07:16 crc kubenswrapper[4593]: I0129 12:07:16.074578 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:07:16 crc kubenswrapper[4593]: E0129 12:07:16.075339 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:07:28 crc kubenswrapper[4593]: I0129 12:07:28.075101 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:07:28 crc kubenswrapper[4593]: E0129 12:07:28.075853 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:07:41 crc kubenswrapper[4593]: I0129 12:07:41.075718 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:07:41 crc kubenswrapper[4593]: E0129 12:07:41.076560 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:07:52 crc kubenswrapper[4593]: I0129 12:07:52.077221 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:07:52 crc kubenswrapper[4593]: E0129 12:07:52.078088 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:08:06 crc kubenswrapper[4593]: I0129 12:08:06.075985 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:08:06 crc kubenswrapper[4593]: I0129 12:08:06.715388 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89"} Jan 29 12:10:33 crc kubenswrapper[4593]: I0129 12:10:33.946151 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:10:33 crc kubenswrapper[4593]: I0129 12:10:33.946869 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:11:03 crc kubenswrapper[4593]: I0129 12:11:03.945883 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:11:03 crc kubenswrapper[4593]: I0129 12:11:03.946455 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.521192 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:19 crc kubenswrapper[4593]: E0129 12:11:19.522315 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="extract-content" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.522349 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="extract-content" Jan 29 12:11:19 crc kubenswrapper[4593]: E0129 
12:11:19.522377 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.522388 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" Jan 29 12:11:19 crc kubenswrapper[4593]: E0129 12:11:19.522409 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="extract-utilities" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.522418 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="extract-utilities" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.522692 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="67146159-618b-4376-89e9-4c4433776a79" containerName="registry-server" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.524170 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.545001 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.693231 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.693394 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwj7\" (UniqueName: \"kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.693456 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.795679 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcwj7\" (UniqueName: \"kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.795770 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.795860 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.796318 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.796625 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.825868 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcwj7\" (UniqueName: \"kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7\") pod \"certified-operators-4zmb4\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:19 crc kubenswrapper[4593]: I0129 12:11:19.845854 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:20 crc kubenswrapper[4593]: I0129 12:11:20.455681 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:20 crc kubenswrapper[4593]: I0129 12:11:20.916012 4593 generic.go:334] "Generic (PLEG): container finished" podID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerID="e974cfd4ba99c10cc2aad6fe3294ee279ef945d78da77b5575efff84d75dc3f5" exitCode=0 Jan 29 12:11:20 crc kubenswrapper[4593]: I0129 12:11:20.916204 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerDied","Data":"e974cfd4ba99c10cc2aad6fe3294ee279ef945d78da77b5575efff84d75dc3f5"} Jan 29 12:11:20 crc kubenswrapper[4593]: I0129 12:11:20.916338 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerStarted","Data":"0f8b5557b97ae87240ce95f6ce1826bf3eddc35e903219d0aa779451e8a2b146"} Jan 29 12:11:20 crc kubenswrapper[4593]: I0129 12:11:20.919398 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:11:22 crc kubenswrapper[4593]: I0129 12:11:22.950691 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerStarted","Data":"9721d75f517671802e10383aaf0d51740b457133fabbb1bb0666df1729b46536"} Jan 29 12:11:26 crc kubenswrapper[4593]: I0129 12:11:26.987416 4593 generic.go:334] "Generic (PLEG): container finished" podID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerID="9721d75f517671802e10383aaf0d51740b457133fabbb1bb0666df1729b46536" exitCode=0 Jan 29 12:11:26 crc kubenswrapper[4593]: I0129 12:11:26.987487 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerDied","Data":"9721d75f517671802e10383aaf0d51740b457133fabbb1bb0666df1729b46536"} Jan 29 12:11:28 crc kubenswrapper[4593]: I0129 12:11:28.001236 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerStarted","Data":"d95a803073d6be732010713f64b21e2542e0573ccca5a3e98a37ffc8b97ffb0a"} Jan 29 12:11:28 crc kubenswrapper[4593]: I0129 12:11:28.023311 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4zmb4" podStartSLOduration=2.553056247 podStartE2EDuration="9.02325509s" podCreationTimestamp="2026-01-29 12:11:19 +0000 UTC" firstStartedPulling="2026-01-29 12:11:20.919063241 +0000 UTC m=+4346.792097432" lastFinishedPulling="2026-01-29 12:11:27.389262084 +0000 UTC m=+4353.262296275" observedRunningTime="2026-01-29 12:11:28.019881249 +0000 UTC m=+4353.892915450" watchObservedRunningTime="2026-01-29 12:11:28.02325509 +0000 UTC m=+4353.896289311" Jan 29 12:11:29 crc kubenswrapper[4593]: I0129 12:11:29.847283 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:29 crc kubenswrapper[4593]: I0129 12:11:29.847683 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:29 crc kubenswrapper[4593]: I0129 12:11:29.900212 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:33 crc kubenswrapper[4593]: I0129 12:11:33.947045 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:11:33 crc kubenswrapper[4593]: I0129 12:11:33.947532 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:11:33 crc kubenswrapper[4593]: I0129 12:11:33.947582 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:11:33 crc kubenswrapper[4593]: I0129 12:11:33.949874 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:11:33 crc kubenswrapper[4593]: I0129 12:11:33.949959 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89" gracePeriod=600 Jan 29 12:11:35 crc kubenswrapper[4593]: 
I0129 12:11:35.063857 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89" exitCode=0 Jan 29 12:11:35 crc kubenswrapper[4593]: I0129 12:11:35.064043 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89"} Jan 29 12:11:35 crc kubenswrapper[4593]: I0129 12:11:35.065362 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"} Jan 29 12:11:35 crc kubenswrapper[4593]: I0129 12:11:35.065462 4593 scope.go:117] "RemoveContainer" containerID="2c03980436f66649dec4fd0ecba0f23fc75892f803373b557afbda47478c9da0" Jan 29 12:11:39 crc kubenswrapper[4593]: I0129 12:11:39.898311 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:39 crc kubenswrapper[4593]: I0129 12:11:39.972107 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:40 crc kubenswrapper[4593]: I0129 12:11:40.121365 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4zmb4" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="registry-server" containerID="cri-o://d95a803073d6be732010713f64b21e2542e0573ccca5a3e98a37ffc8b97ffb0a" gracePeriod=2 Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.137281 4593 generic.go:334] "Generic (PLEG): container finished" podID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerID="d95a803073d6be732010713f64b21e2542e0573ccca5a3e98a37ffc8b97ffb0a" exitCode=0 Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.137379 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerDied","Data":"d95a803073d6be732010713f64b21e2542e0573ccca5a3e98a37ffc8b97ffb0a"} Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.137649 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4zmb4" event={"ID":"61af0d72-8d15-4bf9-90f3-514d5a35adeb","Type":"ContainerDied","Data":"0f8b5557b97ae87240ce95f6ce1826bf3eddc35e903219d0aa779451e8a2b146"} Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.137700 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f8b5557b97ae87240ce95f6ce1826bf3eddc35e903219d0aa779451e8a2b146" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.189551 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.339042 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content\") pod \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.339186 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcwj7\" (UniqueName: \"kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7\") pod \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.339252 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities\") pod \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\" (UID: \"61af0d72-8d15-4bf9-90f3-514d5a35adeb\") " Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.341556 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities" (OuterVolumeSpecName: "utilities") pod "61af0d72-8d15-4bf9-90f3-514d5a35adeb" (UID: "61af0d72-8d15-4bf9-90f3-514d5a35adeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.391262 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61af0d72-8d15-4bf9-90f3-514d5a35adeb" (UID: "61af0d72-8d15-4bf9-90f3-514d5a35adeb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.397605 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7" (OuterVolumeSpecName: "kube-api-access-rcwj7") pod "61af0d72-8d15-4bf9-90f3-514d5a35adeb" (UID: "61af0d72-8d15-4bf9-90f3-514d5a35adeb"). InnerVolumeSpecName "kube-api-access-rcwj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.442116 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.442581 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcwj7\" (UniqueName: \"kubernetes.io/projected/61af0d72-8d15-4bf9-90f3-514d5a35adeb-kube-api-access-rcwj7\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:41 crc kubenswrapper[4593]: I0129 12:11:41.442687 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61af0d72-8d15-4bf9-90f3-514d5a35adeb-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:42 crc kubenswrapper[4593]: I0129 12:11:42.147400 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4zmb4" Jan 29 12:11:42 crc kubenswrapper[4593]: I0129 12:11:42.204270 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:42 crc kubenswrapper[4593]: I0129 12:11:42.210020 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4zmb4"] Jan 29 12:11:43 crc kubenswrapper[4593]: I0129 12:11:43.087394 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" path="/var/lib/kubelet/pods/61af0d72-8d15-4bf9-90f3-514d5a35adeb/volumes" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.362557 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:12:53 crc kubenswrapper[4593]: E0129 12:12:53.364069 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="extract-content" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.364091 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="extract-content" Jan 29 12:12:53 crc kubenswrapper[4593]: E0129 12:12:53.364110 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="extract-utilities" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.364122 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="extract-utilities" Jan 29 12:12:53 crc kubenswrapper[4593]: E0129 12:12:53.364159 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="registry-server" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.364172 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="registry-server" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.364498 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="61af0d72-8d15-4bf9-90f3-514d5a35adeb" containerName="registry-server" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.370349 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.413291 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.433704 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.434228 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.434464 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f997n\" (UniqueName: \"kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.537210 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.537331 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f997n\" (UniqueName: \"kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.537778 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.537802 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.538090 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.562609 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-f997n\" (UniqueName: \"kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n\") pod \"redhat-marketplace-98gxd\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:53 crc kubenswrapper[4593]: I0129 12:12:53.743291 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:12:54 crc kubenswrapper[4593]: I0129 12:12:54.319404 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:12:54 crc kubenswrapper[4593]: I0129 12:12:54.805502 4593 generic.go:334] "Generic (PLEG): container finished" podID="8eaac92f-649f-4974-8386-456b6bd43311" containerID="c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24" exitCode=0 Jan 29 12:12:54 crc kubenswrapper[4593]: I0129 12:12:54.809449 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerDied","Data":"c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24"} Jan 29 12:12:54 crc kubenswrapper[4593]: I0129 12:12:54.809578 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerStarted","Data":"9ad4c1e630bd2cb149d0ba952ca91f032d6db8c71bb5a35438114e8234485e71"} Jan 29 12:12:55 crc kubenswrapper[4593]: I0129 12:12:55.816936 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerStarted","Data":"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33"} Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.368325 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.370562 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.380298 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.417509 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5m2g\" (UniqueName: \"kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.417659 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.417774 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.520228 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.520419 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5m2g\" (UniqueName: \"kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.520459 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.520916 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.521124 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.827036 4593 generic.go:334] "Generic 
(PLEG): container finished" podID="8eaac92f-649f-4974-8386-456b6bd43311" containerID="c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33" exitCode=0 Jan 29 12:12:56 crc kubenswrapper[4593]: I0129 12:12:56.827096 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerDied","Data":"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33"} Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.068716 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5m2g\" (UniqueName: \"kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g\") pod \"community-operators-fvf74\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.312969 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.807473 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.838544 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerStarted","Data":"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2"} Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.843131 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerStarted","Data":"d1eb148f0820d4908158e1d29cd56e7eb7cb9dbbe8b7a6b3f032a7bdbf59b266"} Jan 29 12:12:57 crc kubenswrapper[4593]: I0129 12:12:57.866554 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-98gxd" podStartSLOduration=2.290520244 podStartE2EDuration="4.866534985s" podCreationTimestamp="2026-01-29 12:12:53 +0000 UTC" firstStartedPulling="2026-01-29 12:12:54.80897133 +0000 UTC m=+4440.682005521" lastFinishedPulling="2026-01-29 12:12:57.384986071 +0000 UTC m=+4443.258020262" observedRunningTime="2026-01-29 12:12:57.863243046 +0000 UTC m=+4443.736277257" watchObservedRunningTime="2026-01-29 12:12:57.866534985 +0000 UTC m=+4443.739569176" Jan 29 12:12:58 crc kubenswrapper[4593]: I0129 12:12:58.853051 4593 generic.go:334] "Generic (PLEG): container finished" podID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerID="7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae" exitCode=0 Jan 29 12:12:58 crc kubenswrapper[4593]: I0129 12:12:58.853160 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerDied","Data":"7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae"} Jan 29 12:12:59 crc kubenswrapper[4593]: I0129 12:12:59.865391 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerStarted","Data":"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d"} Jan 29 12:13:01 crc kubenswrapper[4593]: I0129 12:13:01.882562 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerID="dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d" exitCode=0 Jan 29 12:13:01 crc kubenswrapper[4593]: I0129 12:13:01.882653 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerDied","Data":"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d"} Jan 29 12:13:02 crc kubenswrapper[4593]: I0129 12:13:02.893686 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerStarted","Data":"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58"} Jan 29 12:13:02 crc kubenswrapper[4593]: I0129 12:13:02.918268 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fvf74" podStartSLOduration=3.340558313 podStartE2EDuration="6.918244525s" podCreationTimestamp="2026-01-29 12:12:56 +0000 UTC" firstStartedPulling="2026-01-29 12:12:58.854856007 +0000 UTC m=+4444.727890208" lastFinishedPulling="2026-01-29 12:13:02.432542219 +0000 UTC m=+4448.305576420" observedRunningTime="2026-01-29 12:13:02.911219495 +0000 UTC m=+4448.784253686" watchObservedRunningTime="2026-01-29 12:13:02.918244525 +0000 UTC m=+4448.791278716" Jan 29 12:13:03 crc kubenswrapper[4593]: I0129 12:13:03.744544 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:03 crc kubenswrapper[4593]: I0129 12:13:03.744588 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:03 crc kubenswrapper[4593]: I0129 12:13:03.801584 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:03 crc kubenswrapper[4593]: I0129 12:13:03.965998 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:05 crc kubenswrapper[4593]: I0129 12:13:05.131056 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:13:05 crc kubenswrapper[4593]: I0129 12:13:05.924874 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-98gxd" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="registry-server" containerID="cri-o://2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2" gracePeriod=2 Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.451259 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.522546 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities\") pod \"8eaac92f-649f-4974-8386-456b6bd43311\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.522868 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content\") pod \"8eaac92f-649f-4974-8386-456b6bd43311\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.522944 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f997n\" (UniqueName: \"kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n\") pod \"8eaac92f-649f-4974-8386-456b6bd43311\" (UID: \"8eaac92f-649f-4974-8386-456b6bd43311\") " Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.524236 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities" (OuterVolumeSpecName: "utilities") pod "8eaac92f-649f-4974-8386-456b6bd43311" (UID: "8eaac92f-649f-4974-8386-456b6bd43311"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.530220 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n" (OuterVolumeSpecName: "kube-api-access-f997n") pod "8eaac92f-649f-4974-8386-456b6bd43311" (UID: "8eaac92f-649f-4974-8386-456b6bd43311"). InnerVolumeSpecName "kube-api-access-f997n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.566739 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eaac92f-649f-4974-8386-456b6bd43311" (UID: "8eaac92f-649f-4974-8386-456b6bd43311"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.625147 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.625179 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eaac92f-649f-4974-8386-456b6bd43311-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.625191 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f997n\" (UniqueName: \"kubernetes.io/projected/8eaac92f-649f-4974-8386-456b6bd43311-kube-api-access-f997n\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.937837 4593 generic.go:334] "Generic (PLEG): container finished" podID="8eaac92f-649f-4974-8386-456b6bd43311" containerID="2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2" exitCode=0 Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.937904 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98gxd" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.937921 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerDied","Data":"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2"} Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.938085 4593 scope.go:117] "RemoveContainer" containerID="2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.938272 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98gxd" event={"ID":"8eaac92f-649f-4974-8386-456b6bd43311","Type":"ContainerDied","Data":"9ad4c1e630bd2cb149d0ba952ca91f032d6db8c71bb5a35438114e8234485e71"} Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.967990 4593 scope.go:117] "RemoveContainer" containerID="c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33" Jan 29 12:13:06 crc kubenswrapper[4593]: I0129 12:13:06.996329 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.007289 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-98gxd"] Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.018831 4593 scope.go:117] "RemoveContainer" containerID="c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.057459 4593 scope.go:117] "RemoveContainer" containerID="2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2" Jan 29 12:13:07 crc kubenswrapper[4593]: E0129 12:13:07.058191 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2\": container with ID starting with 2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2 not found: ID does not exist" containerID="2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.058236 4593 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2"} err="failed to get container status \"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2\": rpc error: code = NotFound desc = could not find container \"2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2\": container with ID starting with 2ea111a54bbbdf87d668706835a05d7ec48cd68970c1cfb770cb5ccbc940f9f2 not found: ID does not exist" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.058263 4593 scope.go:117] "RemoveContainer" containerID="c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33" Jan 29 12:13:07 crc kubenswrapper[4593]: E0129 12:13:07.058591 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33\": container with ID starting with c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33 not found: ID does not exist" containerID="c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.058621 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33"} err="failed to get container status \"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33\": rpc error: code = NotFound desc = could not find container \"c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33\": container with ID starting with c63281faa233979d4428e6704008c949b1b8e1f15d90274dc988641299acee33 not found: ID does not exist" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.058654 4593 scope.go:117] "RemoveContainer" containerID="c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24" Jan 29 12:13:07 crc kubenswrapper[4593]: E0129 12:13:07.058969 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24\": container with ID starting with c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24 not found: ID does not exist" containerID="c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.059141 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24"} err="failed to get container status \"c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24\": rpc error: code = NotFound desc = could not find container \"c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24\": container with ID starting with c3b74104dc93826cfe06392e55e7e7f73d8560c64b8c4ca083369d0e06d09e24 not found: ID does not exist" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.095409 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eaac92f-649f-4974-8386-456b6bd43311" path="/var/lib/kubelet/pods/8eaac92f-649f-4974-8386-456b6bd43311/volumes" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.313784 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.314339 4593 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:07 crc kubenswrapper[4593]: I0129 12:13:07.808412 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:08 crc kubenswrapper[4593]: I0129 12:13:08.004205 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:09 crc kubenswrapper[4593]: I0129 12:13:09.528999 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:13:09 crc kubenswrapper[4593]: I0129 12:13:09.969974 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fvf74" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="registry-server" containerID="cri-o://17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58" gracePeriod=2 Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.416086 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.606893 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5m2g\" (UniqueName: \"kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g\") pod \"b0685d5b-09d9-4cb1-86d0-89f46550f541\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.606986 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content\") pod \"b0685d5b-09d9-4cb1-86d0-89f46550f541\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.607189 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities\") pod \"b0685d5b-09d9-4cb1-86d0-89f46550f541\" (UID: \"b0685d5b-09d9-4cb1-86d0-89f46550f541\") " Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.608660 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities" (OuterVolumeSpecName: "utilities") pod "b0685d5b-09d9-4cb1-86d0-89f46550f541" (UID: "b0685d5b-09d9-4cb1-86d0-89f46550f541"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.612776 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g" (OuterVolumeSpecName: "kube-api-access-x5m2g") pod "b0685d5b-09d9-4cb1-86d0-89f46550f541" (UID: "b0685d5b-09d9-4cb1-86d0-89f46550f541"). InnerVolumeSpecName "kube-api-access-x5m2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.709850 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5m2g\" (UniqueName: \"kubernetes.io/projected/b0685d5b-09d9-4cb1-86d0-89f46550f541-kube-api-access-x5m2g\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.709883 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.981999 4593 generic.go:334] "Generic (PLEG): container finished" podID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerID="17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58" exitCode=0 Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.982066 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerDied","Data":"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58"} Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.982113 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvf74" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.982139 4593 scope.go:117] "RemoveContainer" containerID="17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58" Jan 29 12:13:10 crc kubenswrapper[4593]: I0129 12:13:10.982119 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvf74" event={"ID":"b0685d5b-09d9-4cb1-86d0-89f46550f541","Type":"ContainerDied","Data":"d1eb148f0820d4908158e1d29cd56e7eb7cb9dbbe8b7a6b3f032a7bdbf59b266"} Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.006943 4593 scope.go:117] "RemoveContainer" containerID="dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.036601 4593 scope.go:117] "RemoveContainer" containerID="7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.095151 4593 scope.go:117] "RemoveContainer" containerID="17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58" Jan 29 12:13:11 crc kubenswrapper[4593]: E0129 12:13:11.096011 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58\": container with ID starting with 17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58 not found: ID does not exist" containerID="17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.096050 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58"} err="failed to get container status \"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58\": rpc error: code = NotFound desc = could not find container \"17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58\": container with ID starting with 17951efd32a8173b9b72530e3dcb68b000d7c6c8c8243276db5d49980e385a58 not found: ID does not exist" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.096083 4593 scope.go:117] 
"RemoveContainer" containerID="dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d" Jan 29 12:13:11 crc kubenswrapper[4593]: E0129 12:13:11.096753 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d\": container with ID starting with dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d not found: ID does not exist" containerID="dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.096779 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d"} err="failed to get container status \"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d\": rpc error: code = NotFound desc = could not find container \"dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d\": container with ID starting with dc808ebf7871452c23a3c7c7c810cf08c86316aedfb66cb866baddf8bdf8102d not found: ID does not exist" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.096795 4593 scope.go:117] "RemoveContainer" containerID="7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae" Jan 29 12:13:11 crc kubenswrapper[4593]: E0129 12:13:11.097114 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae\": container with ID starting with 7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae not found: ID does not exist" containerID="7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.097140 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae"} err="failed to get container status \"7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae\": rpc error: code = NotFound desc = could not find container \"7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae\": container with ID starting with 7225165f8868f0f3ba875fe9ca902a424a8636587d164d157b110b59c672bfae not found: ID does not exist" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.216584 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0685d5b-09d9-4cb1-86d0-89f46550f541" (UID: "b0685d5b-09d9-4cb1-86d0-89f46550f541"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.220588 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0685d5b-09d9-4cb1-86d0-89f46550f541-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.317832 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:13:11 crc kubenswrapper[4593]: I0129 12:13:11.325251 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fvf74"] Jan 29 12:13:13 crc kubenswrapper[4593]: I0129 12:13:13.094930 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" path="/var/lib/kubelet/pods/b0685d5b-09d9-4cb1-86d0-89f46550f541/volumes" Jan 29 12:13:19 crc kubenswrapper[4593]: I0129 12:13:19.057005 4593 generic.go:334] "Generic (PLEG): container finished" podID="d5ea9892-a149-4cfe-bb9c-ef636eacd125" containerID="f1bbc49dcc0cd36e38a7fd4617bfb0fd01fe811e0e734a91b4f25ae6b23bbeaf" exitCode=0 Jan 29 12:13:19 crc kubenswrapper[4593]: I0129 12:13:19.057072 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5ea9892-a149-4cfe-bb9c-ef636eacd125","Type":"ContainerDied","Data":"f1bbc49dcc0cd36e38a7fd4617bfb0fd01fe811e0e734a91b4f25ae6b23bbeaf"} Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.450297 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.600944 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601016 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601141 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601161 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs2hc\" (UniqueName: \"kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601177 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601207 4593 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601236 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601331 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.601361 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret\") pod \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\" (UID: \"d5ea9892-a149-4cfe-bb9c-ef636eacd125\") " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.607140 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data" (OuterVolumeSpecName: "config-data") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.607363 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.607380 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.609247 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.628822 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc" (OuterVolumeSpecName: "kube-api-access-bs2hc") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "kube-api-access-bs2hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.653055 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.656179 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.664389 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.677744 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d5ea9892-a149-4cfe-bb9c-ef636eacd125" (UID: "d5ea9892-a149-4cfe-bb9c-ef636eacd125"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.703410 4593 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.703454 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs2hc\" (UniqueName: \"kubernetes.io/projected/d5ea9892-a149-4cfe-bb9c-ef636eacd125-kube-api-access-bs2hc\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704566 4593 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704591 4593 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704604 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704617 4593 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704628 4593 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5ea9892-a149-4cfe-bb9c-ef636eacd125-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704658 4593 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d5ea9892-a149-4cfe-bb9c-ef636eacd125-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.704671 4593 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5ea9892-a149-4cfe-bb9c-ef636eacd125-config-data\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.730222 4593 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 29 12:13:20 crc kubenswrapper[4593]: I0129 12:13:20.808622 4593 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 29 12:13:21 crc kubenswrapper[4593]: I0129 12:13:21.079087 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 29 12:13:21 crc kubenswrapper[4593]: I0129 12:13:21.086364 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d5ea9892-a149-4cfe-bb9c-ef636eacd125","Type":"ContainerDied","Data":"bf88caa96b3fd17945a137b250bf9d7f8872b0e8469ad3aa1ab198d63888646d"} Jan 29 12:13:21 crc kubenswrapper[4593]: I0129 12:13:21.086407 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf88caa96b3fd17945a137b250bf9d7f8872b0e8469ad3aa1ab198d63888646d" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.326558 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327712 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="extract-utilities" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327733 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="extract-utilities" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327775 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="extract-utilities" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327784 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="extract-utilities" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327793 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="registry-server" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327801 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="registry-server" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327827 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="extract-content" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327833 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="extract-content" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327848 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="extract-content" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327855 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="extract-content" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327868 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ea9892-a149-4cfe-bb9c-ef636eacd125" containerName="tempest-tests-tempest-tests-runner" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327876 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ea9892-a149-4cfe-bb9c-ef636eacd125" containerName="tempest-tests-tempest-tests-runner" Jan 29 12:13:31 crc kubenswrapper[4593]: E0129 12:13:31.327891 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="registry-server" Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.327897 4593 state_mem.go:107] "Deleted CPUSet assignment" 
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.328104 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0685d5b-09d9-4cb1-86d0-89f46550f541" containerName="registry-server"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.328132 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ea9892-a149-4cfe-bb9c-ef636eacd125" containerName="tempest-tests-tempest-tests-runner"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.328150 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eaac92f-649f-4974-8386-456b6bd43311" containerName="registry-server"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.328969 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.331760 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-vt7mb"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.337512 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.441241 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.441371 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbrlg\" (UniqueName: \"kubernetes.io/projected/be3a2ae9-6f0e-459e-bd91-10a92871767c-kube-api-access-xbrlg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.542913 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbrlg\" (UniqueName: \"kubernetes.io/projected/be3a2ae9-6f0e-459e-bd91-10a92871767c-kube-api-access-xbrlg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.543112 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.544576 4593 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.575522 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbrlg\" (UniqueName: \"kubernetes.io/projected/be3a2ae9-6f0e-459e-bd91-10a92871767c-kube-api-access-xbrlg\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.595296 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be3a2ae9-6f0e-459e-bd91-10a92871767c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 29 12:13:31 crc kubenswrapper[4593]: I0129 12:13:31.667103 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 29 12:13:32 crc kubenswrapper[4593]: I0129 12:13:32.150483 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 29 12:13:32 crc kubenswrapper[4593]: I0129 12:13:32.199873 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"be3a2ae9-6f0e-459e-bd91-10a92871767c","Type":"ContainerStarted","Data":"a6f153ce8021cd387a610c92bda1b1f2f68e2eea007e984dd04fdffc30f42452"}
Jan 29 12:13:34 crc kubenswrapper[4593]: I0129 12:13:34.218860 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"be3a2ae9-6f0e-459e-bd91-10a92871767c","Type":"ContainerStarted","Data":"2381ee7cacc824d7c3622424877525831427de11d4cc37fe4c948c4fe154e84a"}
Jan 29 12:13:34 crc kubenswrapper[4593]: I0129 12:13:34.237346 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.66563876 podStartE2EDuration="3.237327275s" podCreationTimestamp="2026-01-29 12:13:31 +0000 UTC" firstStartedPulling="2026-01-29 12:13:32.171821943 +0000 UTC m=+4478.044856134" lastFinishedPulling="2026-01-29 12:13:33.743510458 +0000 UTC m=+4479.616544649" observedRunningTime="2026-01-29 12:13:34.234723064 +0000 UTC m=+4480.107757275" watchObservedRunningTime="2026-01-29 12:13:34.237327275 +0000 UTC m=+4480.110361466"
Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.845598 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zc4pg/must-gather-htdlp"]
Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.848720 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/must-gather-htdlp"
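Editor's note: in the startup-latency entry above, podStartSLOduration is the end-to-end duration with image-pull time excluded — the numbers satisfy (watchObservedRunningTime − podCreationTimestamp) − (lastFinishedPulling − firstStartedPulling) = 3.237327275s − 1.571688515s = 1.66563876s. A sketch reproducing that arithmetic from the logged timestamps (the monotonic "m=+…" suffixes are dropped); the formula is inferred from these values, not quoted from kubelet source.

package main

import (
	"fmt"
	"time"
)

// mustParse reads the log's "2026-01-29 12:13:32.171821943 +0000 UTC" stamps.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-29 12:13:31 +0000 UTC")
	firstPull := mustParse("2026-01-29 12:13:32.171821943 +0000 UTC")
	lastPull := mustParse("2026-01-29 12:13:33.743510458 +0000 UTC")
	observed := mustParse("2026-01-29 12:13:34.237327275 +0000 UTC")

	e2e := observed.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull)  // pull time excluded
	fmt.Println(e2e, slo)                 // 3.237327275s 1.66563876s
}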
Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.851485 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zc4pg"/"default-dockercfg-zg6z9"
Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.851761 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zc4pg"/"kube-root-ca.crt"
Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.851969 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zc4pg"/"openshift-service-ca.crt"
Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.917785 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zc4pg/must-gather-htdlp"]
Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.944683 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp"
Jan 29 12:13:57 crc kubenswrapper[4593]: I0129 12:13:57.944778 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lln5t\" (UniqueName: \"kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp"
Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.046923 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lln5t\" (UniqueName: \"kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp"
Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.047107 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp"
Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.047663 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp"
Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.069322 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lln5t\" (UniqueName: \"kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t\") pod \"must-gather-htdlp\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " pod="openshift-must-gather-zc4pg/must-gather-htdlp"
Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.168440 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/must-gather-htdlp"
Jan 29 12:13:58 crc kubenswrapper[4593]: I0129 12:13:58.666919 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zc4pg/must-gather-htdlp"]
Jan 29 12:13:59 crc kubenswrapper[4593]: I0129 12:13:59.466967 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/must-gather-htdlp" event={"ID":"006cda43-0b58-4970-bcf0-c355509620f8","Type":"ContainerStarted","Data":"86239900d1d38bd4a5bf781851c2ddc657ff989932d54c44e7e343fa9cb35945"}
Jan 29 12:14:03 crc kubenswrapper[4593]: I0129 12:14:03.946850 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:14:03 crc kubenswrapper[4593]: I0129 12:14:03.947627 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:14:07 crc kubenswrapper[4593]: I0129 12:14:07.571094 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/must-gather-htdlp" event={"ID":"006cda43-0b58-4970-bcf0-c355509620f8","Type":"ContainerStarted","Data":"0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5"}
Jan 29 12:14:07 crc kubenswrapper[4593]: I0129 12:14:07.571627 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/must-gather-htdlp" event={"ID":"006cda43-0b58-4970-bcf0-c355509620f8","Type":"ContainerStarted","Data":"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76"}
Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.615888 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zc4pg/must-gather-htdlp" podStartSLOduration=8.632334522 podStartE2EDuration="16.615864197s" podCreationTimestamp="2026-01-29 12:13:57 +0000 UTC" firstStartedPulling="2026-01-29 12:13:58.676059883 +0000 UTC m=+4504.549094084" lastFinishedPulling="2026-01-29 12:14:06.659589568 +0000 UTC m=+4512.532623759" observedRunningTime="2026-01-29 12:14:07.594173972 +0000 UTC m=+4513.467208173" watchObservedRunningTime="2026-01-29 12:14:13.615864197 +0000 UTC m=+4519.488898388"
Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.625170 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-46zhj"]
Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.626358 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-46zhj"
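Editor's note: the machine-config-daemon failures above are HTTP-GET liveness probes against 127.0.0.1:8798/health; a dial error or bad status counts as a probe failure, and enough consecutive failures trigger the restart seen later in the log. A minimal sketch of an equivalent check; the helper and timeout are illustrative, only the URL comes from the log.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP-GET liveness-style check: any dial error or a
// status outside 2xx/3xx counts as a failure, as in the entries above.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println("Liveness probe status=failure output:", err)
	}
}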
Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.653236 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npng5\" (UniqueName: \"kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj"
Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.653732 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj"
Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.755140 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npng5\" (UniqueName: \"kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj"
Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.755299 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj"
Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.755413 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj"
Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.781310 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npng5\" (UniqueName: \"kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5\") pod \"crc-debug-46zhj\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") " pod="openshift-must-gather-zc4pg/crc-debug-46zhj"
Jan 29 12:14:13 crc kubenswrapper[4593]: I0129 12:14:13.943112 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-46zhj"
Jan 29 12:14:14 crc kubenswrapper[4593]: I0129 12:14:14.653681 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" event={"ID":"4b73d5b9-a18b-4213-836b-d326b2998b3b","Type":"ContainerStarted","Data":"9490ccfec3ec0d0a7eb16cfabfbf39ebc9c56a9cfb6e795dd876b4c0791d8c44"}
Jan 29 12:14:28 crc kubenswrapper[4593]: I0129 12:14:28.940802 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" event={"ID":"4b73d5b9-a18b-4213-836b-d326b2998b3b","Type":"ContainerStarted","Data":"71a0e35a9b97791cdb2e7a3a0e49f82c96b3918bca79faeaea9323664e2cf8c6"}
Jan 29 12:14:28 crc kubenswrapper[4593]: I0129 12:14:28.967466 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" podStartSLOduration=2.214317328 podStartE2EDuration="15.967413722s" podCreationTimestamp="2026-01-29 12:14:13 +0000 UTC" firstStartedPulling="2026-01-29 12:14:14.006925169 +0000 UTC m=+4519.879959360" lastFinishedPulling="2026-01-29 12:14:27.760021563 +0000 UTC m=+4533.633055754" observedRunningTime="2026-01-29 12:14:28.956532598 +0000 UTC m=+4534.829566789" watchObservedRunningTime="2026-01-29 12:14:28.967413722 +0000 UTC m=+4534.840447913"
Jan 29 12:14:33 crc kubenswrapper[4593]: I0129 12:14:33.952287 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:14:33 crc kubenswrapper[4593]: I0129 12:14:33.952981 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.180410 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"]
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.183317 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
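Editor's note: the crc-debug-* pods above are "oc debug node/…"-style helpers: a single container-00 with the node root mounted through a hostPath volume named "host", plus a projected service-account token (the kube-api-access-* volume). A sketch of that shape using the Kubernetes Go API types; the image is a placeholder, and the token volume is normally injected automatically rather than declared.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Shape of a crc-debug-style pod: one container with the node's root
	// filesystem mounted via the hostPath volume "host" seen in the log.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "crc-debug-",
			Namespace:    "openshift-must-gather-zc4pg",
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "container-00",
				Image:        "registry.example/ose-tools:latest", // placeholder
				VolumeMounts: []corev1.VolumeMount{{Name: "host", MountPath: "/host"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "host",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/"},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}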
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.186991 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.189461 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.207958 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"]
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.275810 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwsqv\" (UniqueName: \"kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.275992 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.276026 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.377648 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.377706 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.377803 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwsqv\" (UniqueName: \"kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.379003 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.397007 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.419481 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwsqv\" (UniqueName: \"kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv\") pod \"collect-profiles-29494815-ndsr6\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:00 crc kubenswrapper[4593]: I0129 12:15:00.512027 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:01 crc kubenswrapper[4593]: I0129 12:15:01.069047 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"]
Jan 29 12:15:02 crc kubenswrapper[4593]: I0129 12:15:02.388590 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" event={"ID":"cdd89dc3-5db6-4bc0-88c1-472488589100","Type":"ContainerStarted","Data":"f3a2960ccf5dd7cb1b20ed12f992a709cf119e020342cf8773f91b5fa318e059"}
Jan 29 12:15:02 crc kubenswrapper[4593]: I0129 12:15:02.389152 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" event={"ID":"cdd89dc3-5db6-4bc0-88c1-472488589100","Type":"ContainerStarted","Data":"9fe82d1ffb28043d4ade6eac624b53d781d115801dbf977a3a6388e0494c2202"}
Jan 29 12:15:02 crc kubenswrapper[4593]: I0129 12:15:02.422706 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" podStartSLOduration=2.422669327 podStartE2EDuration="2.422669327s" podCreationTimestamp="2026-01-29 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:15:02.41395427 +0000 UTC m=+4568.286988501" watchObservedRunningTime="2026-01-29 12:15:02.422669327 +0000 UTC m=+4568.295703518"
Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.400345 4593 generic.go:334] "Generic (PLEG): container finished" podID="cdd89dc3-5db6-4bc0-88c1-472488589100" containerID="f3a2960ccf5dd7cb1b20ed12f992a709cf119e020342cf8773f91b5fa318e059" exitCode=0
Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.400604 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" event={"ID":"cdd89dc3-5db6-4bc0-88c1-472488589100","Type":"ContainerDied","Data":"f3a2960ccf5dd7cb1b20ed12f992a709cf119e020342cf8773f91b5fa318e059"}
Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.947048 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
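Editor's note: collect-profiles is an OLM CronJob pod; the "container finished" / ContainerDied pair with exitCode=0 above is how the kubelet reports a successful run (the zero-value firstStartedPulling/lastFinishedPulling timestamps mean the image was already present). A sketch of reading that terminal state through client-go; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("openshift-operator-lifecycle-manager").
		Get(context.TODO(), "collect-profiles-29494815-ndsr6", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A terminated state with ExitCode 0 corresponds to the exitCode=0
	// "container finished" entry logged above.
	for _, st := range pod.Status.ContainerStatuses {
		if t := st.State.Terminated; t != nil {
			fmt.Printf("%s exitCode=%d\n", st.Name, t.ExitCode)
		}
	}
}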
Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.947158 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.947208 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2"
Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.948004 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 12:15:03 crc kubenswrapper[4593]: I0129 12:15:03.948087 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" gracePeriod=600
Jan 29 12:15:04 crc kubenswrapper[4593]: E0129 12:15:04.086467 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.410338 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" exitCode=0
Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.410406 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"}
Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.410476 4593 scope.go:117] "RemoveContainer" containerID="e17e203ea610856274105cc5fc7a47b3a11ad9dc0a91cefedfbfe32379366f89"
Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.411193 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"
Jan 29 12:15:04 crc kubenswrapper[4593]: E0129 12:15:04.411517 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.893514 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
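Editor's note: the recurring "back-off 5m0s" errors come from the kubelet's per-container restart backoff, which doubles after each crash up to a five-minute cap; once capped, every sync attempt logs the CrashLoopBackOff error seen above until the backoff window expires. A sketch of that doubling, assuming the usual 10s initial period (the cap is the 5m0s quoted in the log; the initial value is an assumption).

package main

import (
	"fmt"
	"time"
)

func main() {
	// Restart backoff as reflected above: doubles per crash, capped at 5m0s.
	// The 10s initial period is the kubelet default (assumption).
	const initial, cap = 10 * time.Second, 5 * time.Minute
	d := initial
	for i := 1; i <= 8; i++ {
		fmt.Printf("restart %d: back-off %v\n", i, d)
		d *= 2
		if d > cap {
			d = cap // from here on the log shows "back-off 5m0s"
		}
	}
}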
Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.998341 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume\") pod \"cdd89dc3-5db6-4bc0-88c1-472488589100\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") "
Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.998467 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume\") pod \"cdd89dc3-5db6-4bc0-88c1-472488589100\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") "
Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.998522 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwsqv\" (UniqueName: \"kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv\") pod \"cdd89dc3-5db6-4bc0-88c1-472488589100\" (UID: \"cdd89dc3-5db6-4bc0-88c1-472488589100\") "
Jan 29 12:15:04 crc kubenswrapper[4593]: I0129 12:15:04.999267 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume" (OuterVolumeSpecName: "config-volume") pod "cdd89dc3-5db6-4bc0-88c1-472488589100" (UID: "cdd89dc3-5db6-4bc0-88c1-472488589100"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.012264 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cdd89dc3-5db6-4bc0-88c1-472488589100" (UID: "cdd89dc3-5db6-4bc0-88c1-472488589100"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.019800 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv" (OuterVolumeSpecName: "kube-api-access-pwsqv") pod "cdd89dc3-5db6-4bc0-88c1-472488589100" (UID: "cdd89dc3-5db6-4bc0-88c1-472488589100"). InnerVolumeSpecName "kube-api-access-pwsqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.101183 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd89dc3-5db6-4bc0-88c1-472488589100-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.101231 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwsqv\" (UniqueName: \"kubernetes.io/projected/cdd89dc3-5db6-4bc0-88c1-472488589100-kube-api-access-pwsqv\") on node \"crc\" DevicePath \"\""
Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.101246 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cdd89dc3-5db6-4bc0-88c1-472488589100-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.430437 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6" event={"ID":"cdd89dc3-5db6-4bc0-88c1-472488589100","Type":"ContainerDied","Data":"9fe82d1ffb28043d4ade6eac624b53d781d115801dbf977a3a6388e0494c2202"}
Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.430856 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fe82d1ffb28043d4ade6eac624b53d781d115801dbf977a3a6388e0494c2202"
Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.430962 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-ndsr6"
Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.505698 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j"]
Jan 29 12:15:05 crc kubenswrapper[4593]: I0129 12:15:05.519424 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494770-zf92j"]
Jan 29 12:15:07 crc kubenswrapper[4593]: I0129 12:15:07.090894 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3bb310-71b1-4d29-a302-e06181c04f5f" path="/var/lib/kubelet/pods/fe3bb310-71b1-4d29-a302-e06181c04f5f/volumes"
Jan 29 12:15:16 crc kubenswrapper[4593]: I0129 12:15:16.074611 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"
Jan 29 12:15:16 crc kubenswrapper[4593]: E0129 12:15:16.075481 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:15:16 crc kubenswrapper[4593]: I0129 12:15:16.984220 4593 scope.go:117] "RemoveContainer" containerID="f5dc8ed87db86aba663f3bdc857a868a9a85bafb38e9e0269844cbb77f36242a"
Jan 29 12:15:26 crc kubenswrapper[4593]: I0129 12:15:26.665521 4593 generic.go:334] "Generic (PLEG): container finished" podID="4b73d5b9-a18b-4213-836b-d326b2998b3b" containerID="71a0e35a9b97791cdb2e7a3a0e49f82c96b3918bca79faeaea9323664e2cf8c6" exitCode=0
Jan 29 12:15:26 crc kubenswrapper[4593]: I0129 12:15:26.665611 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-46zhj" event={"ID":"4b73d5b9-a18b-4213-836b-d326b2998b3b","Type":"ContainerDied","Data":"71a0e35a9b97791cdb2e7a3a0e49f82c96b3918bca79faeaea9323664e2cf8c6"}
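Editor's note: the UnmountVolume → TearDown → "Volume detached" runs above are the volume manager's reconciler diffing the desired set of mounts (empty once the pod is deleted) against the actual set. A toy sketch of that diff, under the simplifying assumption that each set is just a set of volume names; the real reconciler tracks much richer state.

package main

import "fmt"

// reconcile returns the mount and unmount operations needed to move the
// actual set of volumes toward the desired set — the same shape of decision
// behind the "MountVolume started" / "UnmountVolume started" entries above.
func reconcile(desired, actual map[string]bool) (mount, unmount []string) {
	for v := range desired {
		if !actual[v] {
			mount = append(mount, v)
		}
	}
	for v := range actual {
		if !desired[v] {
			unmount = append(unmount, v)
		}
	}
	return
}

func main() {
	desired := map[string]bool{} // pod deleted: nothing should stay mounted
	actual := map[string]bool{
		"config-volume": true, "secret-volume": true, "kube-api-access-pwsqv": true,
	}
	m, u := reconcile(desired, actual)
	fmt.Println("mount:", m, "unmount:", u)
}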
Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.077200 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"
Jan 29 12:15:27 crc kubenswrapper[4593]: E0129 12:15:27.078952 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.800200 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-46zhj"
Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.838995 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-46zhj"]
Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.849613 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-46zhj"]
Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.864565 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npng5\" (UniqueName: \"kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5\") pod \"4b73d5b9-a18b-4213-836b-d326b2998b3b\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") "
Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.865063 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host\") pod \"4b73d5b9-a18b-4213-836b-d326b2998b3b\" (UID: \"4b73d5b9-a18b-4213-836b-d326b2998b3b\") "
Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.865228 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host" (OuterVolumeSpecName: "host") pod "4b73d5b9-a18b-4213-836b-d326b2998b3b" (UID: "4b73d5b9-a18b-4213-836b-d326b2998b3b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.878611 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5" (OuterVolumeSpecName: "kube-api-access-npng5") pod "4b73d5b9-a18b-4213-836b-d326b2998b3b" (UID: "4b73d5b9-a18b-4213-836b-d326b2998b3b"). InnerVolumeSpecName "kube-api-access-npng5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.968153 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npng5\" (UniqueName: \"kubernetes.io/projected/4b73d5b9-a18b-4213-836b-d326b2998b3b-kube-api-access-npng5\") on node \"crc\" DevicePath \"\""
Jan 29 12:15:27 crc kubenswrapper[4593]: I0129 12:15:27.968200 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4b73d5b9-a18b-4213-836b-d326b2998b3b-host\") on node \"crc\" DevicePath \"\""
Jan 29 12:15:28 crc kubenswrapper[4593]: I0129 12:15:28.686392 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9490ccfec3ec0d0a7eb16cfabfbf39ebc9c56a9cfb6e795dd876b4c0791d8c44"
Jan 29 12:15:28 crc kubenswrapper[4593]: I0129 12:15:28.686505 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-46zhj"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.068274 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-jj248"]
Jan 29 12:15:29 crc kubenswrapper[4593]: E0129 12:15:29.068981 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd89dc3-5db6-4bc0-88c1-472488589100" containerName="collect-profiles"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.069005 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd89dc3-5db6-4bc0-88c1-472488589100" containerName="collect-profiles"
Jan 29 12:15:29 crc kubenswrapper[4593]: E0129 12:15:29.069029 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b73d5b9-a18b-4213-836b-d326b2998b3b" containerName="container-00"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.069035 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b73d5b9-a18b-4213-836b-d326b2998b3b" containerName="container-00"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.069269 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd89dc3-5db6-4bc0-88c1-472488589100" containerName="collect-profiles"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.069289 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b73d5b9-a18b-4213-836b-d326b2998b3b" containerName="container-00"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.069952 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-jj248"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.087155 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.087292 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlcwz\" (UniqueName: \"kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.089399 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b73d5b9-a18b-4213-836b-d326b2998b3b" path="/var/lib/kubelet/pods/4b73d5b9-a18b-4213-836b-d326b2998b3b/volumes"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.188835 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlcwz\" (UniqueName: \"kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.189006 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.189130 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.207457 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlcwz\" (UniqueName: \"kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz\") pod \"crc-debug-jj248\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") " pod="openshift-must-gather-zc4pg/crc-debug-jj248"
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.389140 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-jj248"
Jan 29 12:15:29 crc kubenswrapper[4593]: W0129 12:15:29.444432 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aebf42b_1daf_48f3_bf18_8ee07cd74ee2.slice/crio-ea4c5704849efb98684053fc3de8c53fa835ab1abd79c597ee5214e58c54d06c WatchSource:0}: Error finding container ea4c5704849efb98684053fc3de8c53fa835ab1abd79c597ee5214e58c54d06c: Status 404 returned error can't find the container with id ea4c5704849efb98684053fc3de8c53fa835ab1abd79c597ee5214e58c54d06c
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.696522 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-jj248" event={"ID":"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2","Type":"ContainerStarted","Data":"54ccd1935e3e2e3e59738afad3c9d5c99134092f1b5fc8efa7667569d5fe3894"}
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.696869 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-jj248" event={"ID":"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2","Type":"ContainerStarted","Data":"ea4c5704849efb98684053fc3de8c53fa835ab1abd79c597ee5214e58c54d06c"}
Jan 29 12:15:29 crc kubenswrapper[4593]: I0129 12:15:29.715301 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-zc4pg/crc-debug-jj248" podStartSLOduration=0.715265736 podStartE2EDuration="715.265736ms" podCreationTimestamp="2026-01-29 12:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:15:29.706257082 +0000 UTC m=+4595.579291273" watchObservedRunningTime="2026-01-29 12:15:29.715265736 +0000 UTC m=+4595.588299917"
Jan 29 12:15:30 crc kubenswrapper[4593]: I0129 12:15:30.729088 4593 generic.go:334] "Generic (PLEG): container finished" podID="3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" containerID="54ccd1935e3e2e3e59738afad3c9d5c99134092f1b5fc8efa7667569d5fe3894" exitCode=0
Jan 29 12:15:30 crc kubenswrapper[4593]: I0129 12:15:30.729544 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-jj248" event={"ID":"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2","Type":"ContainerDied","Data":"54ccd1935e3e2e3e59738afad3c9d5c99134092f1b5fc8efa7667569d5fe3894"}
Jan 29 12:15:31 crc kubenswrapper[4593]: I0129 12:15:31.838858 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-jj248"
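Editor's note: the cadvisor warning above names the pod's systemd cgroup slice, kubepods-besteffort-pod3aebf42b_1daf_48f3_bf18_8ee07cd74ee2.slice, which encodes the pod UID with its dashes replaced by underscores. A small sketch recovering the UID from such a slice name; purely illustrative string handling, not a kubelet API.

package main

import (
	"fmt"
	"strings"
)

// podUIDFromSlice recovers a pod UID from a kubepods systemd slice name,
// where the UID's dashes are encoded as underscores.
func podUIDFromSlice(slice string) string {
	s := strings.TrimSuffix(slice, ".slice")
	if i := strings.LastIndex(s, "-pod"); i >= 0 {
		return strings.ReplaceAll(s[i+len("-pod"):], "_", "-")
	}
	return ""
}

func main() {
	fmt.Println(podUIDFromSlice(
		"kubepods-besteffort-pod3aebf42b_1daf_48f3_bf18_8ee07cd74ee2.slice"))
	// prints 3aebf42b-1daf-48f3-bf18-8ee07cd74ee2, the crc-debug-jj248 pod UID
}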
Jan 29 12:15:31 crc kubenswrapper[4593]: I0129 12:15:31.907764 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-jj248"]
Jan 29 12:15:31 crc kubenswrapper[4593]: I0129 12:15:31.917389 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-jj248"]
Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.038871 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlcwz\" (UniqueName: \"kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz\") pod \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") "
Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.039765 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host\") pod \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\" (UID: \"3aebf42b-1daf-48f3-bf18-8ee07cd74ee2\") "
Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.039903 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host" (OuterVolumeSpecName: "host") pod "3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" (UID: "3aebf42b-1daf-48f3-bf18-8ee07cd74ee2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.040333 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-host\") on node \"crc\" DevicePath \"\""
Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.044613 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz" (OuterVolumeSpecName: "kube-api-access-nlcwz") pod "3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" (UID: "3aebf42b-1daf-48f3-bf18-8ee07cd74ee2"). InnerVolumeSpecName "kube-api-access-nlcwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.142528 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlcwz\" (UniqueName: \"kubernetes.io/projected/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2-kube-api-access-nlcwz\") on node \"crc\" DevicePath \"\""
Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.749940 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4c5704849efb98684053fc3de8c53fa835ab1abd79c597ee5214e58c54d06c"
Jan 29 12:15:32 crc kubenswrapper[4593]: I0129 12:15:32.749994 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-jj248"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.085725 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" path="/var/lib/kubelet/pods/3aebf42b-1daf-48f3-bf18-8ee07cd74ee2/volumes"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.088185 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-zxrz4"]
Jan 29 12:15:33 crc kubenswrapper[4593]: E0129 12:15:33.088704 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" containerName="container-00"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.088728 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" containerName="container-00"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.088981 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aebf42b-1daf-48f3-bf18-8ee07cd74ee2" containerName="container-00"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.090240 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.261992 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.262330 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9th\" (UniqueName: \"kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.363619 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch9th\" (UniqueName: \"kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.363718 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.363875 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.467374 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch9th\" (UniqueName: \"kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th\") pod \"crc-debug-zxrz4\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") " pod="openshift-must-gather-zc4pg/crc-debug-zxrz4"
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.708461 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4"
Jan 29 12:15:33 crc kubenswrapper[4593]: W0129 12:15:33.750311 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc41e742a_4985_4b87_8a5b_6a7586971569.slice/crio-fe297e3e8b6a77806b4c620120f05f1affe9bb6665c7269005e5ecdb51b09f39 WatchSource:0}: Error finding container fe297e3e8b6a77806b4c620120f05f1affe9bb6665c7269005e5ecdb51b09f39: Status 404 returned error can't find the container with id fe297e3e8b6a77806b4c620120f05f1affe9bb6665c7269005e5ecdb51b09f39
Jan 29 12:15:33 crc kubenswrapper[4593]: I0129 12:15:33.760388 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" event={"ID":"c41e742a-4985-4b87-8a5b-6a7586971569","Type":"ContainerStarted","Data":"fe297e3e8b6a77806b4c620120f05f1affe9bb6665c7269005e5ecdb51b09f39"}
Jan 29 12:15:34 crc kubenswrapper[4593]: I0129 12:15:34.789142 4593 generic.go:334] "Generic (PLEG): container finished" podID="c41e742a-4985-4b87-8a5b-6a7586971569" containerID="612c74d7772bc16c58093a75fde2a808f49eb1d7c158d2965d447d9b9b7cb962" exitCode=0
Jan 29 12:15:34 crc kubenswrapper[4593]: I0129 12:15:34.789771 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4" event={"ID":"c41e742a-4985-4b87-8a5b-6a7586971569","Type":"ContainerDied","Data":"612c74d7772bc16c58093a75fde2a808f49eb1d7c158d2965d447d9b9b7cb962"}
Jan 29 12:15:34 crc kubenswrapper[4593]: I0129 12:15:34.852214 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-zxrz4"]
Jan 29 12:15:34 crc kubenswrapper[4593]: I0129 12:15:34.864047 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zc4pg/crc-debug-zxrz4"]
Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.106545 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4"
Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.132009 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch9th\" (UniqueName: \"kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th\") pod \"c41e742a-4985-4b87-8a5b-6a7586971569\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") "
Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.132126 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host\") pod \"c41e742a-4985-4b87-8a5b-6a7586971569\" (UID: \"c41e742a-4985-4b87-8a5b-6a7586971569\") "
Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.132271 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host" (OuterVolumeSpecName: "host") pod "c41e742a-4985-4b87-8a5b-6a7586971569" (UID: "c41e742a-4985-4b87-8a5b-6a7586971569"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.133099 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c41e742a-4985-4b87-8a5b-6a7586971569-host\") on node \"crc\" DevicePath \"\""
Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.139153 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th" (OuterVolumeSpecName: "kube-api-access-ch9th") pod "c41e742a-4985-4b87-8a5b-6a7586971569" (UID: "c41e742a-4985-4b87-8a5b-6a7586971569"). InnerVolumeSpecName "kube-api-access-ch9th". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.234743 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch9th\" (UniqueName: \"kubernetes.io/projected/c41e742a-4985-4b87-8a5b-6a7586971569-kube-api-access-ch9th\") on node \"crc\" DevicePath \"\""
Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.825626 4593 scope.go:117] "RemoveContainer" containerID="612c74d7772bc16c58093a75fde2a808f49eb1d7c158d2965d447d9b9b7cb962"
Jan 29 12:15:36 crc kubenswrapper[4593]: I0129 12:15:36.825760 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/crc-debug-zxrz4"
Jan 29 12:15:37 crc kubenswrapper[4593]: I0129 12:15:37.086827 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41e742a-4985-4b87-8a5b-6a7586971569" path="/var/lib/kubelet/pods/c41e742a-4985-4b87-8a5b-6a7586971569/volumes"
Jan 29 12:15:38 crc kubenswrapper[4593]: I0129 12:15:38.074972 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"
Jan 29 12:15:38 crc kubenswrapper[4593]: E0129 12:15:38.075587 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:15:53 crc kubenswrapper[4593]: I0129 12:15:53.075308 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e"
Jan 29 12:15:53 crc kubenswrapper[4593]: E0129 12:15:53.076478 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:16:02 crc kubenswrapper[4593]: I0129 12:16:02.730336 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59844fc4b6-zctck_07d138d8-a5fa-4b77-80e5-924dba8de4c0/barbican-api/0.log"
Jan 29 12:16:02 crc kubenswrapper[4593]: I0129 12:16:02.954178 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59844fc4b6-zctck_07d138d8-a5fa-4b77-80e5-924dba8de4c0/barbican-api-log/0.log"
Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.001678 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cf8bfd486-7dlhx_5f3c398f-928a-4f7e-9e76-6978b8a3673e/barbican-keystone-listener/0.log"
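Editor's note: the "Finished parsing log file" entries that follow are the kubelet serving container logs (here being read out by must-gather); the paths follow the on-disk layout /var/log/pods/<namespace>_<pod>_<uid>/<container>/<restart>.log. A sketch decomposing one such path; the layout is taken from the entries themselves, the field names are illustrative.

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func main() {
	// Decompose a /var/log/pods path of the form
	// <namespace>_<pod>_<uid>/<container>/<restart>.log, as in the log above.
	p := "/var/log/pods/openstack_barbican-api-59844fc4b6-zctck_07d138d8-a5fa-4b77-80e5-924dba8de4c0/barbican-api/0.log"
	rel, _ := filepath.Rel("/var/log/pods", p)
	parts := strings.SplitN(rel, string(filepath.Separator), 3)
	nsPodUID := strings.SplitN(parts[0], "_", 3)
	fmt.Printf("namespace=%s pod=%s uid=%s container=%s file=%s\n",
		nsPodUID[0], nsPodUID[1], nsPodUID[2], parts[1], parts[2])
}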
"Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cf8bfd486-7dlhx_5f3c398f-928a-4f7e-9e76-6978b8a3673e/barbican-keystone-listener/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.154520 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cf8bfd486-7dlhx_5f3c398f-928a-4f7e-9e76-6978b8a3673e/barbican-keystone-listener-log/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.221263 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5947965cdc-wl48v_564d3b50-7cec-4913-bac8-64af532aa32f/barbican-worker/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.352487 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5947965cdc-wl48v_564d3b50-7cec-4913-bac8-64af532aa32f/barbican-worker-log/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.502736 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz_e4241343-d4f5-4690-972e-55f054a93f30/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.698585 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/ceilometer-central-agent/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.734641 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/proxy-httpd/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.750684 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/ceilometer-notification-agent/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.807342 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/sg-core/0.log" Jan 29 12:16:03 crc kubenswrapper[4593]: I0129 12:16:03.981076 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7ea14af-5b7c-44d6-a34c-1a344bfc96ef/cinder-api-log/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.062838 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7ea14af-5b7c-44d6-a34c-1a344bfc96ef/cinder-api/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.238995 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5516e5e9-a6e4-4877-bd34-af4128cc7e33/probe/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.365334 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5516e5e9-a6e4-4877-bd34-af4128cc7e33/cinder-scheduler/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.434062 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-27mbg_80d7dd41-691a-4411-97c2-91245d43b8ea/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.670818 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5_83fa3cd4-ce6a-44bb-b652-c783504941f9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:04 crc kubenswrapper[4593]: I0129 12:16:04.733764 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/init/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.054162 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/init/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.120236 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/dnsmasq-dns/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.175619 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-g462j_fee0ef55-8edb-456c-9344-98a3b34d3aab/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.417421 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43872652-3bb2-4a5c-9b13-cb25d625cd01/glance-httpd/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.433935 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43872652-3bb2-4a5c-9b13-cb25d625cd01/glance-log/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.662147 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c4f0192e-509d-46a4-9a2a-c82106019381/glance-httpd/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.719947 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c4f0192e-509d-46a4-9a2a-c82106019381/glance-log/0.log" Jan 29 12:16:05 crc kubenswrapper[4593]: I0129 12:16:05.882493 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon/2.log" Jan 29 12:16:06 crc kubenswrapper[4593]: I0129 12:16:06.017337 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon/1.log" Jan 29 12:16:06 crc kubenswrapper[4593]: I0129 12:16:06.210826 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-x2n68_0418390b-7622-490c-ad95-ec5eac075440/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:06 crc kubenswrapper[4593]: I0129 12:16:06.385438 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-p4f88_62d982c9-eb7a-4d9d-9cdd-2248c63b06fb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:06 crc kubenswrapper[4593]: I0129 12:16:06.420874 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon-log/0.log" Jan 29 12:16:06 crc kubenswrapper[4593]: I0129 12:16:06.808884 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494801-8jgxn_f7d47080-9737-4b86-9e40-a6c6bf7f1709/keystone-cron/0.log" Jan 29 12:16:07 crc kubenswrapper[4593]: I0129 12:16:07.075090 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:16:07 crc kubenswrapper[4593]: E0129 12:16:07.075401 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:07 crc kubenswrapper[4593]: I0129 12:16:07.205545 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6d0c0ba2-e8ed-4361-8aff-e71714a1617c/kube-state-metrics/0.log" Jan 29 12:16:07 crc kubenswrapper[4593]: I0129 12:16:07.317104 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7f96568f6f-lfzv9_e2e767a2-2e4c-4a41-995f-1f0ca9248d1a/keystone-api/0.log" Jan 29 12:16:07 crc kubenswrapper[4593]: I0129 12:16:07.361996 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-jt98j_1f7fe168-4498-4002-9233-d6c2d9f115fb/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:08 crc kubenswrapper[4593]: I0129 12:16:08.105612 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct_4c7cff3f-040a-4499-825c-3cccd015326a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:08 crc kubenswrapper[4593]: I0129 12:16:08.306526 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84867bd7b9-4vrb9_174d0d16-4c6e-403a-bf10-0a69b4e98fb1/neutron-httpd/0.log" Jan 29 12:16:08 crc kubenswrapper[4593]: I0129 12:16:08.336442 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84867bd7b9-4vrb9_174d0d16-4c6e-403a-bf10-0a69b4e98fb1/neutron-api/0.log" Jan 29 12:16:09 crc kubenswrapper[4593]: I0129 12:16:09.020850 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f/nova-cell0-conductor-conductor/0.log" Jan 29 12:16:09 crc kubenswrapper[4593]: I0129 12:16:09.327589 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bee10dce-c68f-47f4-84e0-623f276964d8/nova-cell1-conductor-conductor/0.log" Jan 29 12:16:09 crc kubenswrapper[4593]: I0129 12:16:09.701994 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0b25e9a9-4f12-4b7f-9001-74b6c3feb118/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 12:16:09 crc kubenswrapper[4593]: I0129 12:16:09.946620 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d08c570-1374-4c5a-832e-c973d7a39796/nova-api-log/0.log" Jan 29 12:16:09 crc kubenswrapper[4593]: I0129 12:16:09.986535 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rtfdg_f45f3aca-42e1-4105-b843-f5288550ce8c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.141611 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d08c570-1374-4c5a-832e-c973d7a39796/nova-api-api/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.164307 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_dc6f5a6c-3bf0-4f78-89f3-1e2683a37958/memcached/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.286670 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_649faf5c-e6bb-4e3d-8cb5-28c57f100008/nova-metadata-log/0.log" Jan 29 12:16:10 
crc kubenswrapper[4593]: I0129 12:16:10.651400 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/mysql-bootstrap/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.838399 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/mysql-bootstrap/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.858098 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4eff0b9f-e2c4-4ae0-9b42-585f9141d740/nova-scheduler-scheduler/0.log" Jan 29 12:16:10 crc kubenswrapper[4593]: I0129 12:16:10.952936 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/galera/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.122259 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/mysql-bootstrap/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.379994 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_220bdfcb-98c4-4c78-8d95-ea64edfaf1ab/openstackclient/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.410335 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/mysql-bootstrap/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.469991 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/galera/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.519552 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_649faf5c-e6bb-4e3d-8cb5-28c57f100008/nova-metadata-metadata/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.640335 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cc9qq_df5842a4-132b-4c71-a970-efe4f00a3827/ovn-controller/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.714783 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g6lk4_9299d646-8191-4da6-a2d1-d5a8c6492e91/openstack-network-exporter/0.log" Jan 29 12:16:11 crc kubenswrapper[4593]: I0129 12:16:11.882492 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server-init/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.047656 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.065502 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovs-vswitchd/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.099288 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server-init/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.140574 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ftxjl_80db2d7c-94e6-418b-a0b4-2b4064356e4b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 
12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.322019 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5320cc21-470d-450c-afa0-c5926e3243c6/openstack-network-exporter/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.378847 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5320cc21-470d-450c-afa0-c5926e3243c6/ovn-northd/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.407987 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fd9a4c00-318d-4bd1-85cb-40971234c3cd/openstack-network-exporter/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.581155 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9/openstack-network-exporter/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.581841 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fd9a4c00-318d-4bd1-85cb-40971234c3cd/ovsdbserver-nb/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.709611 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9/ovsdbserver-sb/0.log" Jan 29 12:16:12 crc kubenswrapper[4593]: I0129 12:16:12.931807 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869645f564-n6fhc_ae8bb4fd-b1d8-4a6a-ac95-9935c4458747/placement-api/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.031079 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869645f564-n6fhc_ae8bb4fd-b1d8-4a6a-ac95-9935c4458747/placement-log/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.357466 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/setup-container/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.544399 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/rabbitmq/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.569413 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/setup-container/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.621071 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/setup-container/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.772195 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/setup-container/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.822714 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/rabbitmq/0.log" Jan 29 12:16:13 crc kubenswrapper[4593]: I0129 12:16:13.910057 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jps44_9a263e61-6654-4030-bd96-c1baa9314111/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.051061 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7tzj5_ce80c16f-5109-46b9-9438-4f05a4132175/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.122274 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb_c3e4e3e3-1994-40a5-bab8-d84db2f44ddb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.157822 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lz46t_b1f286ec-6f85-44c4-94f5-f66bc21c2a64/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.329538 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cfk97_c22e1d76-6585-46e2-9c31-5c002e021882/ssh-known-hosts-edpm-deployment/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.435690 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58d6d94967-wdzcg_f1bc6621-0892-452c-9f95-54554f8c6e68/proxy-server/0.log" Jan 29 12:16:14 crc kubenswrapper[4593]: I0129 12:16:14.547311 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58d6d94967-wdzcg_f1bc6621-0892-452c-9f95-54554f8c6e68/proxy-httpd/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.036430 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jbnzf_4d1e7e96-e120-43f1-bff0-ea3d624e621b/swift-ring-rebalance/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.142454 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-reaper/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.178319 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-auditor/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.259093 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-replicator/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.321684 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-server/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.357711 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-auditor/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.457387 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-server/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.458518 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-updater/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.491436 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-replicator/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.598899 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-expirer/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.621593 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-auditor/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.747330 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-updater/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.765200 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-replicator/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.786046 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-server/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.868235 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/swift-recon-cron/0.log" Jan 29 12:16:15 crc kubenswrapper[4593]: I0129 12:16:15.875330 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/rsync/0.log" Jan 29 12:16:16 crc kubenswrapper[4593]: I0129 12:16:16.115400 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz_ee0ea7fe-3ea4-4944-8101-b03f1566882f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:16 crc kubenswrapper[4593]: I0129 12:16:16.143453 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d5ea9892-a149-4cfe-bb9c-ef636eacd125/tempest-tests-tempest-tests-runner/0.log" Jan 29 12:16:16 crc kubenswrapper[4593]: I0129 12:16:16.294985 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_be3a2ae9-6f0e-459e-bd91-10a92871767c/test-operator-logs-container/0.log" Jan 29 12:16:16 crc kubenswrapper[4593]: I0129 12:16:16.347537 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p_0f5fb9be-3781-4b9a-96d8-705593907345/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:16:21 crc kubenswrapper[4593]: I0129 12:16:21.077674 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:16:21 crc kubenswrapper[4593]: E0129 12:16:21.078448 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:32 crc kubenswrapper[4593]: I0129 12:16:32.075395 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:16:32 crc kubenswrapper[4593]: E0129 12:16:32.076381 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:44 crc kubenswrapper[4593]: I0129 12:16:44.853032 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.127778 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.141027 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.183869 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.353441 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.377312 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.382508 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/extract/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.860037 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-7hmqc_e35e9127-0337-4860-b938-bb477a408f1e/manager/0.log" Jan 29 12:16:45 crc kubenswrapper[4593]: I0129 12:16:45.922579 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-7ns7q_c5e6d3a8-d6d9-4445-9708-28b88928333e/manager/0.log" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 12:16:46.076343 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:16:46 crc kubenswrapper[4593]: E0129 12:16:46.076998 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 12:16:46.369539 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-2ml7m_499923d8-4593-4225-bc4c-6166003a0bb3/manager/0.log" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 
12:16:46.385675 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-xw2pz_734187ee-1e17-4cdc-b3bb-cfbd6e424793/manager/0.log" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 12:16:46.569517 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-xqcrc_50471b23-1d0d-4bd9-a66f-a89b3a39a612/manager/0.log" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 12:16:46.597105 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-98l2v_50a8381e-e59b-4400-9209-c33ef4f99c23/manager/0.log" Jan 29 12:16:46 crc kubenswrapper[4593]: I0129 12:16:46.922289 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-t584q_812ebcfb-766d-4a1b-aaaa-2dd5a96ce047/manager/0.log" Jan 29 12:16:47 crc kubenswrapper[4593]: I0129 12:16:47.000587 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-6zkvt_c2cda883-37e6-4c21-b320-4962ffdc98b3/manager/0.log" Jan 29 12:16:47 crc kubenswrapper[4593]: I0129 12:16:47.211260 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-c89cq_0881deda-c42a-48d8-9059-b7eaf66c0f9f/manager/0.log" Jan 29 12:16:47 crc kubenswrapper[4593]: I0129 12:16:47.217885 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-xf5fn_cdb96936-cd34-44fd-94b5-5570688fdfe6/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.175648 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-qt87l_336c4e93-7d0b-4570-aafc-22e0f812db12/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.223758 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-zx6r8_62efedcb-a194-4692-8e84-a0da7777a400/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.434679 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-8kf6p_40ab1792-0354-4c78-ac44-a217fbc610a9/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.507083 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-9dbds_ba6fb45a-59ff-42ee-acb0-0ee43d001e1e/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.740652 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb_f6e2fc57-0cce-4f5a-bf3e-63efbfff1073/manager/0.log" Jan 29 12:16:48 crc kubenswrapper[4593]: I0129 12:16:48.915663 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-55ccc59995-d7xm7_c8e623f1-2830-4c78-b17a-6000f32978a3/operator/0.log" Jan 29 12:16:49 crc kubenswrapper[4593]: I0129 12:16:49.263709 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sbxwt_0661b605-afb6-4341-9703-ea25a3afc19d/registry-server/0.log" Jan 29 12:16:49 crc kubenswrapper[4593]: I0129 12:16:49.677134 4593 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-kttv8_2c7ec826-43f0-49f3-9d96-4330427e4ed9/manager/0.log" Jan 29 12:16:49 crc kubenswrapper[4593]: I0129 12:16:49.681757 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-885pn_9b88fe2c-a82a-4284-961a-8af3818815ec/manager/0.log" Jan 29 12:16:49 crc kubenswrapper[4593]: I0129 12:16:49.996492 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-k4b7q_0e86fa54-1e41-4bb9-86c7-a0ea0d919270/manager/0.log" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.001900 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tfkk2_2f32633b-0490-4885-9543-a140c807c742/operator/0.log" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.115790 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6d898fd894-sh94p_960bb326-dc22-43e5-bc4f-05c9ce9e26a9/manager/0.log" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.477012 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:16:50 crc kubenswrapper[4593]: E0129 12:16:50.477367 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41e742a-4985-4b87-8a5b-6a7586971569" containerName="container-00" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.477379 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="c41e742a-4985-4b87-8a5b-6a7586971569" containerName="container-00" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.477571 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41e742a-4985-4b87-8a5b-6a7586971569" containerName="container-00" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.478833 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.499741 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.538952 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzjsx\" (UniqueName: \"kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.539004 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.539094 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.646797 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzjsx\" (UniqueName: \"kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.646857 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.646957 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.647520 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.648133 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.684805 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lzjsx\" (UniqueName: \"kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx\") pod \"redhat-operators-zvj4k\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.764058 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-z4mp8_ea8d9bb8-bdec-453d-a308-28b962971254/manager/0.log" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.796080 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.881352 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-ltfr4_b45fb247-850e-40b4-b62e-8551d55efcba/manager/0.log" Jan 29 12:16:50 crc kubenswrapper[4593]: I0129 12:16:50.987204 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-zmssx_0259a320-8da9-48e5-8d73-25b09774d9c1/manager/0.log" Jan 29 12:16:51 crc kubenswrapper[4593]: I0129 12:16:51.323659 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:16:51 crc kubenswrapper[4593]: I0129 12:16:51.504458 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerStarted","Data":"332780bb5ef29b3dd0853836a33ab4697026e10c50ef91e921d4a17666a2c402"} Jan 29 12:16:52 crc kubenswrapper[4593]: I0129 12:16:52.515248 4593 generic.go:334] "Generic (PLEG): container finished" podID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerID="3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23" exitCode=0 Jan 29 12:16:52 crc kubenswrapper[4593]: I0129 12:16:52.515284 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerDied","Data":"3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23"} Jan 29 12:16:52 crc kubenswrapper[4593]: I0129 12:16:52.517510 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:16:53 crc kubenswrapper[4593]: I0129 12:16:53.527767 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerStarted","Data":"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c"} Jan 29 12:16:59 crc kubenswrapper[4593]: I0129 12:16:59.074949 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:16:59 crc kubenswrapper[4593]: E0129 12:16:59.075687 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:16:59 crc kubenswrapper[4593]: I0129 12:16:59.598301 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerID="57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c" exitCode=0 Jan 29 12:16:59 crc kubenswrapper[4593]: I0129 12:16:59.598384 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerDied","Data":"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c"} Jan 29 12:17:00 crc kubenswrapper[4593]: I0129 12:17:00.609786 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerStarted","Data":"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9"} Jan 29 12:17:00 crc kubenswrapper[4593]: I0129 12:17:00.640385 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zvj4k" podStartSLOduration=3.146868685 podStartE2EDuration="10.640365116s" podCreationTimestamp="2026-01-29 12:16:50 +0000 UTC" firstStartedPulling="2026-01-29 12:16:52.517092663 +0000 UTC m=+4678.390126854" lastFinishedPulling="2026-01-29 12:17:00.010589094 +0000 UTC m=+4685.883623285" observedRunningTime="2026-01-29 12:17:00.632682698 +0000 UTC m=+4686.505716909" watchObservedRunningTime="2026-01-29 12:17:00.640365116 +0000 UTC m=+4686.513399307" Jan 29 12:17:00 crc kubenswrapper[4593]: I0129 12:17:00.797100 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:00 crc kubenswrapper[4593]: I0129 12:17:00.797295 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:01 crc kubenswrapper[4593]: I0129 12:17:01.854734 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zvj4k" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" probeResult="failure" output=< Jan 29 12:17:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:17:01 crc kubenswrapper[4593]: > Jan 29 12:17:10 crc kubenswrapper[4593]: I0129 12:17:10.075187 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:17:10 crc kubenswrapper[4593]: E0129 12:17:10.076000 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:17:11 crc kubenswrapper[4593]: I0129 12:17:11.843869 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zvj4k" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" probeResult="failure" output=< Jan 29 12:17:11 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:17:11 crc kubenswrapper[4593]: > Jan 29 12:17:17 crc kubenswrapper[4593]: I0129 12:17:17.098592 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pf5p2_9bce548b-2c64-4ac5-a797-979de4cf7656/control-plane-machine-set-operator/0.log" Jan 29 12:17:17 crc kubenswrapper[4593]: I0129 12:17:17.404837 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vtdww_bb259eac-6aa7-42d9-883b-2af6b63af4b8/kube-rbac-proxy/0.log" Jan 29 12:17:17 crc kubenswrapper[4593]: I0129 12:17:17.431594 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vtdww_bb259eac-6aa7-42d9-883b-2af6b63af4b8/machine-api-operator/0.log" Jan 29 12:17:21 crc kubenswrapper[4593]: I0129 12:17:21.075082 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:17:21 crc kubenswrapper[4593]: E0129 12:17:21.076170 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:17:21 crc kubenswrapper[4593]: I0129 12:17:21.846547 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zvj4k" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" probeResult="failure" output=< Jan 29 12:17:21 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:17:21 crc kubenswrapper[4593]: > Jan 29 12:17:30 crc kubenswrapper[4593]: I0129 12:17:30.862181 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:30 crc kubenswrapper[4593]: I0129 12:17:30.919357 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:31 crc kubenswrapper[4593]: I0129 12:17:31.111165 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:17:31 crc kubenswrapper[4593]: I0129 12:17:31.921023 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zvj4k" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" containerID="cri-o://ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9" gracePeriod=2 Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.432927 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.472945 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzjsx\" (UniqueName: \"kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx\") pod \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.473051 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities\") pod \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.473151 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content\") pod \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\" (UID: \"3950981d-ad0a-47e1-b5a2-da040c9c3e49\") " Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.473774 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities" (OuterVolumeSpecName: "utilities") pod "3950981d-ad0a-47e1-b5a2-da040c9c3e49" (UID: "3950981d-ad0a-47e1-b5a2-da040c9c3e49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.501941 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx" (OuterVolumeSpecName: "kube-api-access-lzjsx") pod "3950981d-ad0a-47e1-b5a2-da040c9c3e49" (UID: "3950981d-ad0a-47e1-b5a2-da040c9c3e49"). InnerVolumeSpecName "kube-api-access-lzjsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.575483 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzjsx\" (UniqueName: \"kubernetes.io/projected/3950981d-ad0a-47e1-b5a2-da040c9c3e49-kube-api-access-lzjsx\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.575887 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.698884 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3950981d-ad0a-47e1-b5a2-da040c9c3e49" (UID: "3950981d-ad0a-47e1-b5a2-da040c9c3e49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.801107 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3950981d-ad0a-47e1-b5a2-da040c9c3e49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.894499 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qhfhj_59d387c2-4d0b-4d6c-a0d8-2230657bebd0/cert-manager-controller/0.log" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.935620 4593 generic.go:334] "Generic (PLEG): container finished" podID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerID="ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9" exitCode=0 Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.935680 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerDied","Data":"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9"} Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.935712 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zvj4k" event={"ID":"3950981d-ad0a-47e1-b5a2-da040c9c3e49","Type":"ContainerDied","Data":"332780bb5ef29b3dd0853836a33ab4697026e10c50ef91e921d4a17666a2c402"} Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.935731 4593 scope.go:117] "RemoveContainer" containerID="ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.935904 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zvj4k" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.973425 4593 scope.go:117] "RemoveContainer" containerID="57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c" Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.990963 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:17:32 crc kubenswrapper[4593]: I0129 12:17:32.998561 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zvj4k"] Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.008868 4593 scope.go:117] "RemoveContainer" containerID="3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.049781 4593 scope.go:117] "RemoveContainer" containerID="ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9" Jan 29 12:17:33 crc kubenswrapper[4593]: E0129 12:17:33.054174 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9\": container with ID starting with ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9 not found: ID does not exist" containerID="ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.054382 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9"} err="failed to get container status \"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9\": rpc error: code = NotFound desc = could not 
find container \"ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9\": container with ID starting with ead35afda6b94383f8202b4c4320d9272303c14a494cbbb2916716e5b89d21d9 not found: ID does not exist" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.054481 4593 scope.go:117] "RemoveContainer" containerID="57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c" Jan 29 12:17:33 crc kubenswrapper[4593]: E0129 12:17:33.054873 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c\": container with ID starting with 57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c not found: ID does not exist" containerID="57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.054926 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c"} err="failed to get container status \"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c\": rpc error: code = NotFound desc = could not find container \"57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c\": container with ID starting with 57c19851d986daa7ca568fca1eea28d39b6c5f81f046ce453505615f2577774c not found: ID does not exist" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.054954 4593 scope.go:117] "RemoveContainer" containerID="3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23" Jan 29 12:17:33 crc kubenswrapper[4593]: E0129 12:17:33.055180 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23\": container with ID starting with 3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23 not found: ID does not exist" containerID="3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.055211 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23"} err="failed to get container status \"3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23\": rpc error: code = NotFound desc = could not find container \"3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23\": container with ID starting with 3b8e38a89d9a46d1986494a648468a2e3f120a9158adfe071e37653dcbf89f23 not found: ID does not exist" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.078936 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:17:33 crc kubenswrapper[4593]: E0129 12:17:33.079325 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.084838 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" 
path="/var/lib/kubelet/pods/3950981d-ad0a-47e1-b5a2-da040c9c3e49/volumes" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.171262 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lw7j7_79aa2cc5-a031-412d-a4c7-ba9251f84ed6/cert-manager-cainjector/0.log" Jan 29 12:17:33 crc kubenswrapper[4593]: I0129 12:17:33.219891 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-t7s4r_e2b5756a-c46e-4e76-90bf-0a5c7c1dc759/cert-manager-webhook/0.log" Jan 29 12:17:46 crc kubenswrapper[4593]: I0129 12:17:46.075854 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:17:46 crc kubenswrapper[4593]: E0129 12:17:46.076697 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:17:47 crc kubenswrapper[4593]: I0129 12:17:47.694465 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-nck62_2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2/nmstate-console-plugin/0.log" Jan 29 12:17:47 crc kubenswrapper[4593]: I0129 12:17:47.938591 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-q2995_7a32568f-244c-432b-8186-683e8bc10371/nmstate-metrics/0.log" Jan 29 12:17:48 crc kubenswrapper[4593]: I0129 12:17:48.029746 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q2lbc_ea391d24-e32c-440b-b5c2-218920192125/nmstate-handler/0.log" Jan 29 12:17:48 crc kubenswrapper[4593]: I0129 12:17:48.037093 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-q2995_7a32568f-244c-432b-8186-683e8bc10371/kube-rbac-proxy/0.log" Jan 29 12:17:48 crc kubenswrapper[4593]: I0129 12:17:48.191914 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-xmhmc_b2e0c4ff-8a2b-474d-8414-a0026d61b11e/nmstate-operator/0.log" Jan 29 12:17:48 crc kubenswrapper[4593]: I0129 12:17:48.286532 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-47n46_72d4f068-dc20-44d0-aca6-c8f0992536e6/nmstate-webhook/0.log" Jan 29 12:17:59 crc kubenswrapper[4593]: I0129 12:17:59.079268 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:17:59 crc kubenswrapper[4593]: E0129 12:17:59.079981 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:18:13 crc kubenswrapper[4593]: I0129 12:18:13.076350 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:18:13 crc 
kubenswrapper[4593]: E0129 12:18:13.077227 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:18:17 crc kubenswrapper[4593]: I0129 12:18:17.124565 4593 scope.go:117] "RemoveContainer" containerID="d95a803073d6be732010713f64b21e2542e0573ccca5a3e98a37ffc8b97ffb0a" Jan 29 12:18:17 crc kubenswrapper[4593]: I0129 12:18:17.157510 4593 scope.go:117] "RemoveContainer" containerID="e974cfd4ba99c10cc2aad6fe3294ee279ef945d78da77b5575efff84d75dc3f5" Jan 29 12:18:17 crc kubenswrapper[4593]: I0129 12:18:17.196978 4593 scope.go:117] "RemoveContainer" containerID="9721d75f517671802e10383aaf0d51740b457133fabbb1bb0666df1729b46536" Jan 29 12:18:23 crc kubenswrapper[4593]: I0129 12:18:23.578190 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hvqbg_3462ad7c-24f3-4c73-990d-a0f471d08d1d/kube-rbac-proxy/0.log" Jan 29 12:18:23 crc kubenswrapper[4593]: I0129 12:18:23.722622 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hvqbg_3462ad7c-24f3-4c73-990d-a0f471d08d1d/controller/0.log" Jan 29 12:18:23 crc kubenswrapper[4593]: I0129 12:18:23.772268 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.129245 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.182399 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.186730 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.251603 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.415766 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.517231 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.537566 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.560503 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.785622 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.808555 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.853250 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/controller/0.log" Jan 29 12:18:24 crc kubenswrapper[4593]: I0129 12:18:24.879539 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.038748 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/frr-metrics/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.081799 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:18:25 crc kubenswrapper[4593]: E0129 12:18:25.082133 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.160204 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/kube-rbac-proxy-frr/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.234847 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/kube-rbac-proxy/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.482235 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/reloader/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.654578 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dj42h_45d808cf-80c4-4f7b-a172-76e4ecd9e37b/frr-k8s-webhook-server/0.log" Jan 29 12:18:25 crc kubenswrapper[4593]: I0129 12:18:25.990426 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bf4d9f4bd-ll9bk_421156e9-d8d3-4112-bd58-d09c40a70a12/manager/0.log" Jan 29 12:18:26 crc kubenswrapper[4593]: I0129 12:18:26.133022 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7fdc78c47c-w2tv4_c3381187-83f6-4877-8d72-3ed30f360a86/webhook-server/0.log" Jan 29 12:18:26 crc kubenswrapper[4593]: I0129 12:18:26.439835 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m77zw_37969e5d-3111-45cc-a711-da443a473c52/kube-rbac-proxy/0.log" Jan 29 12:18:26 crc kubenswrapper[4593]: I0129 12:18:26.477439 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/frr/0.log" Jan 29 12:18:26 crc kubenswrapper[4593]: I0129 12:18:26.766893 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-m77zw_37969e5d-3111-45cc-a711-da443a473c52/speaker/0.log" Jan 29 12:18:39 crc kubenswrapper[4593]: I0129 12:18:39.078128 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:18:39 crc kubenswrapper[4593]: E0129 12:18:39.078889 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.050006 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.376262 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.443316 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.443513 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.489781 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.587849 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.842708 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/extract/0.log" Jan 29 12:18:42 crc kubenswrapper[4593]: I0129 12:18:42.868720 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.105188 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.112074 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.116073 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.286787 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.325736 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/extract/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.356072 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:18:43 crc kubenswrapper[4593]: I0129 12:18:43.944686 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.199842 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.207117 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.207621 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.404943 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.438860 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:18:44 crc kubenswrapper[4593]: I0129 12:18:44.705561 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.132976 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/registry-server/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.155661 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.170780 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.173796 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:18:45 
crc kubenswrapper[4593]: I0129 12:18:45.370179 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.380594 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.578912 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-s2rlp_7a59fe58-c900-46ea-8ff2-8a7f49210dc3/marketplace-operator/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.720474 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.970426 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:18:45 crc kubenswrapper[4593]: I0129 12:18:45.970427 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:18:46 crc kubenswrapper[4593]: I0129 12:18:46.029403 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/registry-server/0.log" Jan 29 12:18:46 crc kubenswrapper[4593]: I0129 12:18:46.073532 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:18:46 crc kubenswrapper[4593]: I0129 12:18:46.217932 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:18:46 crc kubenswrapper[4593]: I0129 12:18:46.235356 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:18:46 crc kubenswrapper[4593]: I0129 12:18:46.330484 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:46.502591 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:46.522777 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:46.724867 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:46.729844 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:18:47 crc 
kubenswrapper[4593]: I0129 12:18:47.139287 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:47.301641 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/registry-server/0.log" Jan 29 12:18:47 crc kubenswrapper[4593]: I0129 12:18:47.774912 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/registry-server/0.log" Jan 29 12:18:53 crc kubenswrapper[4593]: I0129 12:18:53.075449 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:18:53 crc kubenswrapper[4593]: E0129 12:18:53.076198 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:18:54 crc kubenswrapper[4593]: I0129 12:18:54.771512 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="8581bb16-8d35-4521-8886-3c71554a3a4d" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 29 12:18:56 crc kubenswrapper[4593]: I0129 12:18:56.852828 4593 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-47n46 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.32:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 12:18:56 crc kubenswrapper[4593]: I0129 12:18:56.853344 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-47n46" podUID="72d4f068-dc20-44d0-aca6-c8f0992536e6" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.32:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 12:19:08 crc kubenswrapper[4593]: I0129 12:19:08.075327 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:19:08 crc kubenswrapper[4593]: E0129 12:19:08.076177 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:19:19 crc kubenswrapper[4593]: I0129 12:19:19.083983 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:19:19 crc kubenswrapper[4593]: E0129 12:19:19.084944 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:19:31 crc kubenswrapper[4593]: I0129 12:19:31.075168 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:19:31 crc kubenswrapper[4593]: E0129 12:19:31.076069 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:19:43 crc kubenswrapper[4593]: I0129 12:19:43.083263 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:19:43 crc kubenswrapper[4593]: E0129 12:19:43.084218 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:19:58 crc kubenswrapper[4593]: I0129 12:19:58.076013 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:19:58 crc kubenswrapper[4593]: E0129 12:19:58.076940 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:20:11 crc kubenswrapper[4593]: I0129 12:20:11.075848 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:20:12 crc kubenswrapper[4593]: I0129 12:20:12.119123 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27"} Jan 29 12:21:17 crc kubenswrapper[4593]: I0129 12:21:17.321372 4593 scope.go:117] "RemoveContainer" containerID="71a0e35a9b97791cdb2e7a3a0e49f82c96b3918bca79faeaea9323664e2cf8c6" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.613870 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:33 crc kubenswrapper[4593]: E0129 12:21:33.615339 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="extract-utilities" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.615369 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="extract-utilities" Jan 29 12:21:33 
crc kubenswrapper[4593]: E0129 12:21:33.615382 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="extract-content" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.615389 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="extract-content" Jan 29 12:21:33 crc kubenswrapper[4593]: E0129 12:21:33.615415 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.615424 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.615662 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="3950981d-ad0a-47e1-b5a2-da040c9c3e49" containerName="registry-server" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.617054 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.662143 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.783763 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cx7z\" (UniqueName: \"kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.783895 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.783958 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.885691 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cx7z\" (UniqueName: \"kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.885781 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.885817 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.886324 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.886675 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.909502 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cx7z\" (UniqueName: \"kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z\") pod \"certified-operators-fhwxm\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.945210 4593 generic.go:334] "Generic (PLEG): container finished" podID="006cda43-0b58-4970-bcf0-c355509620f8" containerID="46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76" exitCode=0 Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.945293 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zc4pg/must-gather-htdlp" event={"ID":"006cda43-0b58-4970-bcf0-c355509620f8","Type":"ContainerDied","Data":"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76"} Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.946048 4593 scope.go:117] "RemoveContainer" containerID="46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76" Jan 29 12:21:33 crc kubenswrapper[4593]: I0129 12:21:33.963213 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:34 crc kubenswrapper[4593]: I0129 12:21:34.647672 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:34 crc kubenswrapper[4593]: I0129 12:21:34.775194 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zc4pg_must-gather-htdlp_006cda43-0b58-4970-bcf0-c355509620f8/gather/0.log" Jan 29 12:21:34 crc kubenswrapper[4593]: I0129 12:21:34.957515 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerStarted","Data":"4e2df6b4721a8d473a96f101f336863a5ef2eb9c2ef8535919425422543b4bda"} Jan 29 12:21:34 crc kubenswrapper[4593]: I0129 12:21:34.957560 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerStarted","Data":"e20ec106468d262fa4bc5b0870a4ccc7cc66d00dbc9cc0aea978c890696a3eae"} Jan 29 12:21:36 crc kubenswrapper[4593]: I0129 12:21:36.000434 4593 generic.go:334] "Generic (PLEG): container finished" podID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerID="4e2df6b4721a8d473a96f101f336863a5ef2eb9c2ef8535919425422543b4bda" exitCode=0 Jan 29 12:21:36 crc kubenswrapper[4593]: I0129 12:21:36.000847 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerDied","Data":"4e2df6b4721a8d473a96f101f336863a5ef2eb9c2ef8535919425422543b4bda"} Jan 29 12:21:37 crc kubenswrapper[4593]: I0129 12:21:37.013427 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerStarted","Data":"7d1ea073d8cea1ae501e5f4b6fc119c0435003af8966c47bea4400a1082dae38"} Jan 29 12:21:39 crc kubenswrapper[4593]: I0129 12:21:39.033217 4593 generic.go:334] "Generic (PLEG): container finished" podID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerID="7d1ea073d8cea1ae501e5f4b6fc119c0435003af8966c47bea4400a1082dae38" exitCode=0 Jan 29 12:21:39 crc kubenswrapper[4593]: I0129 12:21:39.033292 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerDied","Data":"7d1ea073d8cea1ae501e5f4b6fc119c0435003af8966c47bea4400a1082dae38"} Jan 29 12:21:40 crc kubenswrapper[4593]: I0129 12:21:40.046783 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerStarted","Data":"15bd3dfc93df9578c6c17be7ac613b236f76f8886d45c782d3038d688f30e718"} Jan 29 12:21:40 crc kubenswrapper[4593]: I0129 12:21:40.070551 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fhwxm" podStartSLOduration=3.3962060530000002 podStartE2EDuration="7.070505955s" podCreationTimestamp="2026-01-29 12:21:33 +0000 UTC" firstStartedPulling="2026-01-29 12:21:36.004915946 +0000 UTC m=+4961.877950137" lastFinishedPulling="2026-01-29 12:21:39.679215848 +0000 UTC m=+4965.552250039" observedRunningTime="2026-01-29 12:21:40.067190965 +0000 UTC m=+4965.940225156" watchObservedRunningTime="2026-01-29 12:21:40.070505955 +0000 UTC 
m=+4965.943540146" Jan 29 12:21:43 crc kubenswrapper[4593]: I0129 12:21:43.963996 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:43 crc kubenswrapper[4593]: I0129 12:21:43.964702 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.023448 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.134330 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.324434 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zc4pg/must-gather-htdlp"] Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.324886 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zc4pg/must-gather-htdlp" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="copy" containerID="cri-o://0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5" gracePeriod=2 Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.337030 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zc4pg/must-gather-htdlp"] Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.819158 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zc4pg_must-gather-htdlp_006cda43-0b58-4970-bcf0-c355509620f8/copy/0.log" Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.820037 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.943566 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lln5t\" (UniqueName: \"kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t\") pod \"006cda43-0b58-4970-bcf0-c355509620f8\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.948039 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output\") pod \"006cda43-0b58-4970-bcf0-c355509620f8\" (UID: \"006cda43-0b58-4970-bcf0-c355509620f8\") " Jan 29 12:21:44 crc kubenswrapper[4593]: I0129 12:21:44.971917 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t" (OuterVolumeSpecName: "kube-api-access-lln5t") pod "006cda43-0b58-4970-bcf0-c355509620f8" (UID: "006cda43-0b58-4970-bcf0-c355509620f8"). InnerVolumeSpecName "kube-api-access-lln5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.053190 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lln5t\" (UniqueName: \"kubernetes.io/projected/006cda43-0b58-4970-bcf0-c355509620f8-kube-api-access-lln5t\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.109468 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zc4pg_must-gather-htdlp_006cda43-0b58-4970-bcf0-c355509620f8/copy/0.log" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.110121 4593 generic.go:334] "Generic (PLEG): container finished" podID="006cda43-0b58-4970-bcf0-c355509620f8" containerID="0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5" exitCode=143 Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.111450 4593 scope.go:117] "RemoveContainer" containerID="0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.111802 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zc4pg/must-gather-htdlp" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.194155 4593 scope.go:117] "RemoveContainer" containerID="46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.195206 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.328443 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "006cda43-0b58-4970-bcf0-c355509620f8" (UID: "006cda43-0b58-4970-bcf0-c355509620f8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.337445 4593 scope.go:117] "RemoveContainer" containerID="0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5" Jan 29 12:21:45 crc kubenswrapper[4593]: E0129 12:21:45.338480 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5\": container with ID starting with 0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5 not found: ID does not exist" containerID="0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.338528 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5"} err="failed to get container status \"0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5\": rpc error: code = NotFound desc = could not find container \"0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5\": container with ID starting with 0a2615ec02f7acf6e4eef7d334633a655b2c7f91120bb732e5f28991053841a5 not found: ID does not exist" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.338551 4593 scope.go:117] "RemoveContainer" containerID="46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76" Jan 29 12:21:45 crc kubenswrapper[4593]: E0129 12:21:45.342920 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76\": container with ID starting with 46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76 not found: ID does not exist" containerID="46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.342969 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76"} err="failed to get container status \"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76\": rpc error: code = NotFound desc = could not find container \"46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76\": container with ID starting with 46cdce02a2dbb7b4a939e2cdd7a751400cc8c8329f7b96782ad4b1979b724c76 not found: ID does not exist" Jan 29 12:21:45 crc kubenswrapper[4593]: I0129 12:21:45.377834 4593 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/006cda43-0b58-4970-bcf0-c355509620f8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:46 crc kubenswrapper[4593]: I0129 12:21:46.120021 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fhwxm" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="registry-server" containerID="cri-o://15bd3dfc93df9578c6c17be7ac613b236f76f8886d45c782d3038d688f30e718" gracePeriod=2 Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.089175 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006cda43-0b58-4970-bcf0-c355509620f8" path="/var/lib/kubelet/pods/006cda43-0b58-4970-bcf0-c355509620f8/volumes" Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.133671 4593 generic.go:334] "Generic (PLEG): container finished" 
podID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerID="15bd3dfc93df9578c6c17be7ac613b236f76f8886d45c782d3038d688f30e718" exitCode=0 Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.133738 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerDied","Data":"15bd3dfc93df9578c6c17be7ac613b236f76f8886d45c782d3038d688f30e718"} Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.741801 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.924425 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities\") pod \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.924574 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cx7z\" (UniqueName: \"kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z\") pod \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.924722 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content\") pod \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\" (UID: \"544e38ca-9cdb-4ca1-82b9-dd6290b12428\") " Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.925492 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities" (OuterVolumeSpecName: "utilities") pod "544e38ca-9cdb-4ca1-82b9-dd6290b12428" (UID: "544e38ca-9cdb-4ca1-82b9-dd6290b12428"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:47 crc kubenswrapper[4593]: I0129 12:21:47.932039 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z" (OuterVolumeSpecName: "kube-api-access-7cx7z") pod "544e38ca-9cdb-4ca1-82b9-dd6290b12428" (UID: "544e38ca-9cdb-4ca1-82b9-dd6290b12428"). InnerVolumeSpecName "kube-api-access-7cx7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.027210 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cx7z\" (UniqueName: \"kubernetes.io/projected/544e38ca-9cdb-4ca1-82b9-dd6290b12428-kube-api-access-7cx7z\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.027246 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.146261 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fhwxm" event={"ID":"544e38ca-9cdb-4ca1-82b9-dd6290b12428","Type":"ContainerDied","Data":"e20ec106468d262fa4bc5b0870a4ccc7cc66d00dbc9cc0aea978c890696a3eae"} Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.146325 4593 scope.go:117] "RemoveContainer" containerID="15bd3dfc93df9578c6c17be7ac613b236f76f8886d45c782d3038d688f30e718" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.147045 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fhwxm" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.167791 4593 scope.go:117] "RemoveContainer" containerID="7d1ea073d8cea1ae501e5f4b6fc119c0435003af8966c47bea4400a1082dae38" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.654875 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "544e38ca-9cdb-4ca1-82b9-dd6290b12428" (UID: "544e38ca-9cdb-4ca1-82b9-dd6290b12428"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.690223 4593 scope.go:117] "RemoveContainer" containerID="4e2df6b4721a8d473a96f101f336863a5ef2eb9c2ef8535919425422543b4bda" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.741012 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/544e38ca-9cdb-4ca1-82b9-dd6290b12428-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.785149 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:48 crc kubenswrapper[4593]: I0129 12:21:48.795258 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fhwxm"] Jan 29 12:21:49 crc kubenswrapper[4593]: I0129 12:21:49.085989 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" path="/var/lib/kubelet/pods/544e38ca-9cdb-4ca1-82b9-dd6290b12428/volumes" Jan 29 12:22:17 crc kubenswrapper[4593]: I0129 12:22:17.386135 4593 scope.go:117] "RemoveContainer" containerID="54ccd1935e3e2e3e59738afad3c9d5c99134092f1b5fc8efa7667569d5fe3894" Jan 29 12:22:33 crc kubenswrapper[4593]: I0129 12:22:33.946177 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:22:33 crc kubenswrapper[4593]: I0129 12:22:33.946812 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:23:03 crc kubenswrapper[4593]: I0129 12:23:03.958592 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:23:03 crc kubenswrapper[4593]: I0129 12:23:03.959278 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:23:33 crc kubenswrapper[4593]: I0129 12:23:33.945800 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:23:33 crc kubenswrapper[4593]: I0129 12:23:33.946392 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:23:33 crc 
kubenswrapper[4593]: I0129 12:23:33.946453 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:23:33 crc kubenswrapper[4593]: I0129 12:23:33.947275 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:23:33 crc kubenswrapper[4593]: I0129 12:23:33.947329 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27" gracePeriod=600 Jan 29 12:23:34 crc kubenswrapper[4593]: I0129 12:23:34.133945 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27" exitCode=0 Jan 29 12:23:34 crc kubenswrapper[4593]: I0129 12:23:34.134006 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27"} Jan 29 12:23:34 crc kubenswrapper[4593]: I0129 12:23:34.134045 4593 scope.go:117] "RemoveContainer" containerID="00e338889bdcd53096b3fa83abdc39c9d6997d711f77760aefb9f99019aa9b3e" Jan 29 12:23:35 crc kubenswrapper[4593]: I0129 12:23:35.144188 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"} Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.187357 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"] Jan 29 12:24:09 crc kubenswrapper[4593]: E0129 12:24:09.188361 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="extract-content" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188380 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="extract-content" Jan 29 12:24:09 crc kubenswrapper[4593]: E0129 12:24:09.188395 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="copy" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188402 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="copy" Jan 29 12:24:09 crc kubenswrapper[4593]: E0129 12:24:09.188418 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="registry-server" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188423 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="registry-server" Jan 29 12:24:09 crc kubenswrapper[4593]: E0129 12:24:09.188438 4593 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="gather" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188443 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="gather" Jan 29 12:24:09 crc kubenswrapper[4593]: E0129 12:24:09.188453 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="extract-utilities" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188459 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="extract-utilities" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188724 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="copy" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188744 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="006cda43-0b58-4970-bcf0-c355509620f8" containerName="gather" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.188758 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="544e38ca-9cdb-4ca1-82b9-dd6290b12428" containerName="registry-server" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.190215 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.203748 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"] Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.261380 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.261489 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.261808 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxw44\" (UniqueName: \"kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.363393 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.364063 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxw44\" (UniqueName: \"kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44\") 
pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.364108 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.364139 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.364573 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:09 crc kubenswrapper[4593]: I0129 12:24:09.868174 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxw44\" (UniqueName: \"kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44\") pod \"redhat-marketplace-gh6r5\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") " pod="openshift-marketplace/redhat-marketplace-gh6r5" Jan 29 12:24:10 crc kubenswrapper[4593]: I0129 12:24:10.117322 4593 util.go:30] "No sandbox for pod can be found. 
Jan 29 12:24:10 crc kubenswrapper[4593]: I0129 12:24:10.117322 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh6r5"
Jan 29 12:24:10 crc kubenswrapper[4593]: I0129 12:24:10.578877 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"]
Jan 29 12:24:11 crc kubenswrapper[4593]: I0129 12:24:11.510858 4593 generic.go:334] "Generic (PLEG): container finished" podID="37487459-95b3-4700-85d3-8eae3d218459" containerID="16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7" exitCode=0
Jan 29 12:24:11 crc kubenswrapper[4593]: I0129 12:24:11.511224 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerDied","Data":"16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7"}
Jan 29 12:24:11 crc kubenswrapper[4593]: I0129 12:24:11.511274 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerStarted","Data":"d4277f5a84556bab91331ef8c9c210c90b196f2deb075bbaeb81e6199c759bee"}
Jan 29 12:24:11 crc kubenswrapper[4593]: I0129 12:24:11.515328 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 12:24:13 crc kubenswrapper[4593]: I0129 12:24:13.579213 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerStarted","Data":"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24"}
Jan 29 12:24:14 crc kubenswrapper[4593]: I0129 12:24:14.591594 4593 generic.go:334] "Generic (PLEG): container finished" podID="37487459-95b3-4700-85d3-8eae3d218459" containerID="0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24" exitCode=0
Jan 29 12:24:14 crc kubenswrapper[4593]: I0129 12:24:14.591685 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerDied","Data":"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24"}
Jan 29 12:24:15 crc kubenswrapper[4593]: I0129 12:24:15.607462 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerStarted","Data":"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e"}
Jan 29 12:24:15 crc kubenswrapper[4593]: I0129 12:24:15.641521 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gh6r5" podStartSLOduration=2.954399756 podStartE2EDuration="6.641493715s" podCreationTimestamp="2026-01-29 12:24:09 +0000 UTC" firstStartedPulling="2026-01-29 12:24:11.513675139 +0000 UTC m=+5117.386709330" lastFinishedPulling="2026-01-29 12:24:15.200769098 +0000 UTC m=+5121.073803289" observedRunningTime="2026-01-29 12:24:15.62361359 +0000 UTC m=+5121.496647781" watchObservedRunningTime="2026-01-29 12:24:15.641493715 +0000 UTC m=+5121.514527906"
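Annotation: the pod_startup_latency_tracker entry is internally consistent. podStartE2EDuration is observedRunningTime minus podCreationTimestamp (12:24:15.641 - 12:24:09 = 6.641s), and podStartSLOduration subtracts the image-pull window (15.200769098 - 11.513675139 = 3.687s; 6.641 - 3.687 = 2.954s), since pull time is not charged against the startup SLO. A check of that arithmetic, with the timestamps copied from the entry:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// same textual format the kubelet prints (without the monotonic "m=" suffix)
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-29 12:24:09 +0000 UTC")
	firstPull := mustParse("2026-01-29 12:24:11.513675139 +0000 UTC")
	lastPull := mustParse("2026-01-29 12:24:15.200769098 +0000 UTC")
	running := mustParse("2026-01-29 12:24:15.641493715 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // pull time does not count against the SLO
	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
	// prints 6.641493715s and 2.954399756s, matching the log entry above
}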
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.404589 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fsx2j"]
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.406855 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.426233 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fsx2j"]
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.477056 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.477375 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.477678 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhwr2\" (UniqueName: \"kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.579020 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhwr2\" (UniqueName: \"kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.579086 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.579131 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.579824 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.579921 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.602268 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhwr2\" (UniqueName: \"kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2\") pod \"community-operators-fsx2j\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") " pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:17 crc kubenswrapper[4593]: I0129 12:24:17.728232 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:18 crc kubenswrapper[4593]: I0129 12:24:18.356675 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fsx2j"]
Jan 29 12:24:18 crc kubenswrapper[4593]: I0129 12:24:18.632969 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerStarted","Data":"209ffbcc1a678a7d65c8310cd83d69a1db8590a0079496bbe454339367ab236f"}
Jan 29 12:24:19 crc kubenswrapper[4593]: I0129 12:24:19.644218 4593 generic.go:334] "Generic (PLEG): container finished" podID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerID="4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff" exitCode=0
Jan 29 12:24:19 crc kubenswrapper[4593]: I0129 12:24:19.644275 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerDied","Data":"4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff"}
Jan 29 12:24:20 crc kubenswrapper[4593]: I0129 12:24:20.118189 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gh6r5"
Jan 29 12:24:20 crc kubenswrapper[4593]: I0129 12:24:20.118551 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gh6r5"
Jan 29 12:24:20 crc kubenswrapper[4593]: I0129 12:24:20.171306 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gh6r5"
Jan 29 12:24:20 crc kubenswrapper[4593]: I0129 12:24:20.708673 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gh6r5"
Jan 29 12:24:21 crc kubenswrapper[4593]: I0129 12:24:21.664884 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerStarted","Data":"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030"}
Jan 29 12:24:21 crc kubenswrapper[4593]: I0129 12:24:21.783049 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"]
Jan 29 12:24:22 crc kubenswrapper[4593]: I0129 12:24:22.673766 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gh6r5" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="registry-server" containerID="cri-o://325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e" gracePeriod=2
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.637023 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh6r5"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.688489 4593 generic.go:334] "Generic (PLEG): container finished" podID="37487459-95b3-4700-85d3-8eae3d218459" containerID="325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e" exitCode=0
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.688541 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerDied","Data":"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e"}
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.689541 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gh6r5" event={"ID":"37487459-95b3-4700-85d3-8eae3d218459","Type":"ContainerDied","Data":"d4277f5a84556bab91331ef8c9c210c90b196f2deb075bbaeb81e6199c759bee"}
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.688591 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gh6r5"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.689672 4593 scope.go:117] "RemoveContainer" containerID="325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.777435 4593 scope.go:117] "RemoveContainer" containerID="0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.793109 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content\") pod \"37487459-95b3-4700-85d3-8eae3d218459\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") "
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.793417 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxw44\" (UniqueName: \"kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44\") pod \"37487459-95b3-4700-85d3-8eae3d218459\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") "
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.793540 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities\") pod \"37487459-95b3-4700-85d3-8eae3d218459\" (UID: \"37487459-95b3-4700-85d3-8eae3d218459\") "
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.795477 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities" (OuterVolumeSpecName: "utilities") pod "37487459-95b3-4700-85d3-8eae3d218459" (UID: "37487459-95b3-4700-85d3-8eae3d218459"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.803425 4593 scope.go:117] "RemoveContainer" containerID="16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.812174 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44" (OuterVolumeSpecName: "kube-api-access-cxw44") pod "37487459-95b3-4700-85d3-8eae3d218459" (UID: "37487459-95b3-4700-85d3-8eae3d218459"). InnerVolumeSpecName "kube-api-access-cxw44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.825329 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37487459-95b3-4700-85d3-8eae3d218459" (UID: "37487459-95b3-4700-85d3-8eae3d218459"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.894888 4593 scope.go:117] "RemoveContainer" containerID="325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e"
Jan 29 12:24:23 crc kubenswrapper[4593]: E0129 12:24:23.895470 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e\": container with ID starting with 325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e not found: ID does not exist" containerID="325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.895508 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e"} err="failed to get container status \"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e\": rpc error: code = NotFound desc = could not find container \"325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e\": container with ID starting with 325a85e9886c75ed2187dc83272bbd450c195c3e493c5fe74506903d56e3e96e not found: ID does not exist"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.895534 4593 scope.go:117] "RemoveContainer" containerID="0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24"
Jan 29 12:24:23 crc kubenswrapper[4593]: E0129 12:24:23.896043 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24\": container with ID starting with 0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24 not found: ID does not exist" containerID="0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.896160 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24"} err="failed to get container status \"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24\": rpc error: code = NotFound desc = could not find container \"0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24\": container with ID starting with 0650d061520731e0c2ff467cec3ad7f7b669bf60c95dcf416854747e15c07d24 not found: ID does not exist"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.896259 4593 scope.go:117] "RemoveContainer" containerID="16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7"
Jan 29 12:24:23 crc kubenswrapper[4593]: E0129 12:24:23.896606 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7\": container with ID starting with 16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7 not found: ID does not exist" containerID="16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.896657 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7"} err="failed to get container status \"16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7\": rpc error: code = NotFound desc = could not find container \"16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7\": container with ID starting with 16123fd659218b3e8a0deecd934f827d98be0eb2152755b56cc90cf8cf2148e7 not found: ID does not exist"
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.897402 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.897486 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxw44\" (UniqueName: \"kubernetes.io/projected/37487459-95b3-4700-85d3-8eae3d218459-kube-api-access-cxw44\") on node \"crc\" DevicePath \"\""
Jan 29 12:24:23 crc kubenswrapper[4593]: I0129 12:24:23.897555 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37487459-95b3-4700-85d3-8eae3d218459-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:24:24 crc kubenswrapper[4593]: I0129 12:24:24.032766 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"]
Jan 29 12:24:24 crc kubenswrapper[4593]: I0129 12:24:24.041499 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gh6r5"]
Jan 29 12:24:24 crc kubenswrapper[4593]: I0129 12:24:24.703071 4593 generic.go:334] "Generic (PLEG): container finished" podID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerID="86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030" exitCode=0
Jan 29 12:24:24 crc kubenswrapper[4593]: I0129 12:24:24.703139 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerDied","Data":"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030"}
Jan 29 12:24:25 crc kubenswrapper[4593]: I0129 12:24:25.085570 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37487459-95b3-4700-85d3-8eae3d218459" path="/var/lib/kubelet/pods/37487459-95b3-4700-85d3-8eae3d218459/volumes"
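Annotation: once the API object is finalized (SyncLoop REMOVE) and every volume reports detached, the kubelet's housekeeping pass deletes /var/lib/kubelet/pods/<uid>/volumes — the "Cleaned up orphaned pod volumes dir" line above. A sketch of such a check follows; the path layout matches the log, but the emptiness check is a simplification of what the kubelet actually verifies before removing the directory:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDir removes a pod's volumes directory, but only when no
// volume subdirectories remain -- deleting it with mounts still present
// could destroy data, so a non-empty dir is left for the next pass.
func cleanupOrphanedPodDir(kubeletRoot, podUID string) error {
	volumesDir := filepath.Join(kubeletRoot, "pods", podUID, "volumes")
	plugins, err := os.ReadDir(volumesDir)
	if err != nil {
		return err
	}
	for _, pluginDir := range plugins {
		vols, err := os.ReadDir(filepath.Join(volumesDir, pluginDir.Name()))
		if err != nil {
			return err
		}
		if len(vols) > 0 {
			return fmt.Errorf("volumes still present under %s, skipping", pluginDir.Name())
		}
	}
	fmt.Printf("Cleaned up orphaned pod volumes dir path=%q\n", volumesDir)
	return os.RemoveAll(volumesDir)
}

func main() {
	_ = cleanupOrphanedPodDir("/var/lib/kubelet", "37487459-95b3-4700-85d3-8eae3d218459")
}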
event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerStarted","Data":"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124"} Jan 29 12:24:25 crc kubenswrapper[4593]: I0129 12:24:25.751675 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fsx2j" podStartSLOduration=3.186550892 podStartE2EDuration="8.751656429s" podCreationTimestamp="2026-01-29 12:24:17 +0000 UTC" firstStartedPulling="2026-01-29 12:24:19.65197176 +0000 UTC m=+5125.525005951" lastFinishedPulling="2026-01-29 12:24:25.217077297 +0000 UTC m=+5131.090111488" observedRunningTime="2026-01-29 12:24:25.744623348 +0000 UTC m=+5131.617657539" watchObservedRunningTime="2026-01-29 12:24:25.751656429 +0000 UTC m=+5131.624690620" Jan 29 12:24:27 crc kubenswrapper[4593]: I0129 12:24:27.728779 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:27 crc kubenswrapper[4593]: I0129 12:24:27.729243 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:27 crc kubenswrapper[4593]: I0129 12:24:27.779625 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fsx2j" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.841728 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dw4s4/must-gather-vjpbp"] Jan 29 12:24:28 crc kubenswrapper[4593]: E0129 12:24:28.842423 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="registry-server" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.842439 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="registry-server" Jan 29 12:24:28 crc kubenswrapper[4593]: E0129 12:24:28.842469 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="extract-utilities" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.842476 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="extract-utilities" Jan 29 12:24:28 crc kubenswrapper[4593]: E0129 12:24:28.842495 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="extract-content" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.842502 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="extract-content" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.842749 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="37487459-95b3-4700-85d3-8eae3d218459" containerName="registry-server" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.845763 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.869243 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dw4s4"/"openshift-service-ca.crt" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.869243 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dw4s4"/"kube-root-ca.crt" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.892505 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dw4s4/must-gather-vjpbp"] Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.954689 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mslvw\" (UniqueName: \"kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:28 crc kubenswrapper[4593]: I0129 12:24:28.954775 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.056253 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mslvw\" (UniqueName: \"kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.056341 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.057020 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.116361 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mslvw\" (UniqueName: \"kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw\") pod \"must-gather-vjpbp\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.165071 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.770080 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="c1755998-9149-49be-b10f-c4fe029728bc" containerName="galera" probeResult="failure" output="command timed out" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.802065 4593 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c1755998-9149-49be-b10f-c4fe029728bc" containerName="galera" probeResult="failure" output="command timed out" Jan 29 12:24:29 crc kubenswrapper[4593]: I0129 12:24:29.961583 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dw4s4/must-gather-vjpbp"] Jan 29 12:24:29 crc kubenswrapper[4593]: W0129 12:24:29.984983 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65f07111_44a8_402c_887e_fb65ab51a2ba.slice/crio-245594f7dafbd456f724f1376fd10ae6d87a34162aa2ea7de6b153cdc54c71cd WatchSource:0}: Error finding container 245594f7dafbd456f724f1376fd10ae6d87a34162aa2ea7de6b153cdc54c71cd: Status 404 returned error can't find the container with id 245594f7dafbd456f724f1376fd10ae6d87a34162aa2ea7de6b153cdc54c71cd Jan 29 12:24:30 crc kubenswrapper[4593]: I0129 12:24:30.802306 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" event={"ID":"65f07111-44a8-402c-887e-fb65ab51a2ba","Type":"ContainerStarted","Data":"1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a"} Jan 29 12:24:30 crc kubenswrapper[4593]: I0129 12:24:30.802671 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" event={"ID":"65f07111-44a8-402c-887e-fb65ab51a2ba","Type":"ContainerStarted","Data":"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee"} Jan 29 12:24:30 crc kubenswrapper[4593]: I0129 12:24:30.802686 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" event={"ID":"65f07111-44a8-402c-887e-fb65ab51a2ba","Type":"ContainerStarted","Data":"245594f7dafbd456f724f1376fd10ae6d87a34162aa2ea7de6b153cdc54c71cd"} Jan 29 12:24:30 crc kubenswrapper[4593]: I0129 12:24:30.830586 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" podStartSLOduration=2.830557455 podStartE2EDuration="2.830557455s" podCreationTimestamp="2026-01-29 12:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:24:30.82779531 +0000 UTC m=+5136.700829511" watchObservedRunningTime="2026-01-29 12:24:30.830557455 +0000 UTC m=+5136.703591656" Jan 29 12:24:34 crc kubenswrapper[4593]: I0129 12:24:34.878520 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-mlk67"] Jan 29 12:24:34 crc kubenswrapper[4593]: I0129 12:24:34.880330 4593 util.go:30] "No sandbox for pod can be found. 
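Annotation: for must-gather-vjpbp, firstStartedPulling and lastFinishedPulling both read 0001-01-01 00:00:00 +0000 UTC. That is Go's zero time.Time, used here as a sentinel: the image was already present, no pull ever started, and podStartSLOduration equals podStartE2EDuration. A small demonstration of distinguishing "never pulled" from a real timestamp:

package main

import (
	"fmt"
	"time"
)

func main() {
	var firstStartedPulling time.Time // zero value: 0001-01-01 00:00:00 +0000 UTC
	fmt.Println(firstStartedPulling)  // prints the sentinel seen in the log

	pullTime := time.Duration(0)
	if !firstStartedPulling.IsZero() {
		// only subtract a pull window when a pull actually happened
		pullTime = time.Since(firstStartedPulling)
	}
	fmt.Printf("pull time charged against the SLO: %v\n", pullTime)
}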
Jan 29 12:24:34 crc kubenswrapper[4593]: I0129 12:24:34.878520 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-mlk67"]
Jan 29 12:24:34 crc kubenswrapper[4593]: I0129 12:24:34.880330 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-mlk67"
Jan 29 12:24:34 crc kubenswrapper[4593]: I0129 12:24:34.883756 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dw4s4"/"default-dockercfg-gg8rn"
Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.016117 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67"
Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.016257 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db66w\" (UniqueName: \"kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67"
Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.118904 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67"
Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.119122 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67"
Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.119156 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db66w\" (UniqueName: \"kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67"
Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.155572 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db66w\" (UniqueName: \"kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w\") pod \"crc-debug-mlk67\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") " pod="openshift-must-gather-dw4s4/crc-debug-mlk67"
Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.201075 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-mlk67"
Jan 29 12:24:35 crc kubenswrapper[4593]: W0129 12:24:35.259140 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21818d64_20a5_4483_8f13_919b612d1007.slice/crio-a06c829d257baf4355f4fb1cf267874c14b344384da541e9ef522804001315b0 WatchSource:0}: Error finding container a06c829d257baf4355f4fb1cf267874c14b344384da541e9ef522804001315b0: Status 404 returned error can't find the container with id a06c829d257baf4355f4fb1cf267874c14b344384da541e9ef522804001315b0
Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.849157 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" event={"ID":"21818d64-20a5-4483-8f13-919b612d1007","Type":"ContainerStarted","Data":"b492a7dd406b0c27babd0f943ac62c7e59cd70af84483b5b682c1f16e22a9e9e"}
Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.849712 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" event={"ID":"21818d64-20a5-4483-8f13-919b612d1007","Type":"ContainerStarted","Data":"a06c829d257baf4355f4fb1cf267874c14b344384da541e9ef522804001315b0"}
Jan 29 12:24:35 crc kubenswrapper[4593]: I0129 12:24:35.874901 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" podStartSLOduration=1.874862275 podStartE2EDuration="1.874862275s" podCreationTimestamp="2026-01-29 12:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:24:35.862795068 +0000 UTC m=+5141.735829259" watchObservedRunningTime="2026-01-29 12:24:35.874862275 +0000 UTC m=+5141.747896456"
Jan 29 12:24:37 crc kubenswrapper[4593]: I0129 12:24:37.788875 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:37 crc kubenswrapper[4593]: I0129 12:24:37.879600 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fsx2j"]
Jan 29 12:24:37 crc kubenswrapper[4593]: I0129 12:24:37.879846 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fsx2j" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="registry-server" containerID="cri-o://4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124" gracePeriod=2
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.540761 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.655623 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content\") pod \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") "
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.655880 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities\") pod \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") "
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.655919 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhwr2\" (UniqueName: \"kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2\") pod \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\" (UID: \"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b\") "
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.656604 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities" (OuterVolumeSpecName: "utilities") pod "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" (UID: "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.670378 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2" (OuterVolumeSpecName: "kube-api-access-dhwr2") pod "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" (UID: "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b"). InnerVolumeSpecName "kube-api-access-dhwr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.753942 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" (UID: "89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.763563 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.763644 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.763663 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhwr2\" (UniqueName: \"kubernetes.io/projected/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b-kube-api-access-dhwr2\") on node \"crc\" DevicePath \"\""
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.883042 4593 generic.go:334] "Generic (PLEG): container finished" podID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerID="4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124" exitCode=0
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.883094 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerDied","Data":"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124"}
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.883127 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fsx2j" event={"ID":"89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b","Type":"ContainerDied","Data":"209ffbcc1a678a7d65c8310cd83d69a1db8590a0079496bbe454339367ab236f"}
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.883152 4593 scope.go:117] "RemoveContainer" containerID="4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124"
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.883312 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fsx2j"
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.916930 4593 scope.go:117] "RemoveContainer" containerID="86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030"
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.922910 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fsx2j"]
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.933966 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fsx2j"]
Jan 29 12:24:38 crc kubenswrapper[4593]: I0129 12:24:38.981394 4593 scope.go:117] "RemoveContainer" containerID="4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff"
Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.020384 4593 scope.go:117] "RemoveContainer" containerID="4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124"
Jan 29 12:24:39 crc kubenswrapper[4593]: E0129 12:24:39.022062 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124\": container with ID starting with 4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124 not found: ID does not exist" containerID="4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124"
Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.022107 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124"} err="failed to get container status \"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124\": rpc error: code = NotFound desc = could not find container \"4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124\": container with ID starting with 4d1e3d0e4ee577844e8f8b5547aa7cf41a4f58c10456741876dcbf00c6529124 not found: ID does not exist"
Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.022137 4593 scope.go:117] "RemoveContainer" containerID="86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030"
Jan 29 12:24:39 crc kubenswrapper[4593]: E0129 12:24:39.033416 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030\": container with ID starting with 86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030 not found: ID does not exist" containerID="86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030"
Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.033465 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030"} err="failed to get container status \"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030\": rpc error: code = NotFound desc = could not find container \"86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030\": container with ID starting with 86c5848a86e7335e43980646d8799a5669e1e2b3ee0212764f28168ba1b6a030 not found: ID does not exist"
Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.033496 4593 scope.go:117] "RemoveContainer" containerID="4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff"
Jan 29 12:24:39 crc kubenswrapper[4593]: E0129 12:24:39.036121 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff\": container with ID starting with 4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff not found: ID does not exist" containerID="4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff"
Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.036178 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff"} err="failed to get container status \"4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff\": rpc error: code = NotFound desc = could not find container \"4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff\": container with ID starting with 4760d4b4efd54e9f0d81dab92eeb29247ea63508178f867c550999b4c73786ff not found: ID does not exist"
Jan 29 12:24:39 crc kubenswrapper[4593]: I0129 12:24:39.086980 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" path="/var/lib/kubelet/pods/89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b/volumes"
Jan 29 12:25:24 crc kubenswrapper[4593]: I0129 12:25:24.413269 4593 generic.go:334] "Generic (PLEG): container finished" podID="21818d64-20a5-4483-8f13-919b612d1007" containerID="b492a7dd406b0c27babd0f943ac62c7e59cd70af84483b5b682c1f16e22a9e9e" exitCode=0
Jan 29 12:25:24 crc kubenswrapper[4593]: I0129 12:25:24.413483 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-mlk67" event={"ID":"21818d64-20a5-4483-8f13-919b612d1007","Type":"ContainerDied","Data":"b492a7dd406b0c27babd0f943ac62c7e59cd70af84483b5b682c1f16e22a9e9e"}
Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.524842 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-mlk67"
Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.560792 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-mlk67"]
Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.571783 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-mlk67"]
Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.647874 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host\") pod \"21818d64-20a5-4483-8f13-919b612d1007\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") "
Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.648199 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db66w\" (UniqueName: \"kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w\") pod \"21818d64-20a5-4483-8f13-919b612d1007\" (UID: \"21818d64-20a5-4483-8f13-919b612d1007\") "
Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.649401 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host" (OuterVolumeSpecName: "host") pod "21818d64-20a5-4483-8f13-919b612d1007" (UID: "21818d64-20a5-4483-8f13-919b612d1007"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.654852 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w" (OuterVolumeSpecName: "kube-api-access-db66w") pod "21818d64-20a5-4483-8f13-919b612d1007" (UID: "21818d64-20a5-4483-8f13-919b612d1007"). InnerVolumeSpecName "kube-api-access-db66w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.761163 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db66w\" (UniqueName: \"kubernetes.io/projected/21818d64-20a5-4483-8f13-919b612d1007-kube-api-access-db66w\") on node \"crc\" DevicePath \"\""
Jan 29 12:25:25 crc kubenswrapper[4593]: I0129 12:25:25.761209 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/21818d64-20a5-4483-8f13-919b612d1007-host\") on node \"crc\" DevicePath \"\""
Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.431156 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a06c829d257baf4355f4fb1cf267874c14b344384da541e9ef522804001315b0"
Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.431252 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-mlk67"
Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.854229 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-n8b5q"]
Jan 29 12:25:26 crc kubenswrapper[4593]: E0129 12:25:26.854709 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="extract-utilities"
Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.854734 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="extract-utilities"
Jan 29 12:25:26 crc kubenswrapper[4593]: E0129 12:25:26.854743 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="registry-server"
Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.854749 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="registry-server"
Jan 29 12:25:26 crc kubenswrapper[4593]: E0129 12:25:26.854766 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="extract-content"
Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.854772 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="extract-content"
Jan 29 12:25:26 crc kubenswrapper[4593]: E0129 12:25:26.854792 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21818d64-20a5-4483-8f13-919b612d1007" containerName="container-00"
Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.854797 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="21818d64-20a5-4483-8f13-919b612d1007" containerName="container-00"
Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.855014 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="21818d64-20a5-4483-8f13-919b612d1007" containerName="container-00"
Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.855031 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="registry-server"
podUID="89d2f0ed-8f37-4f9b-a07b-c2ecea1ad18b" containerName="registry-server" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.855893 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.858343 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dw4s4"/"default-dockercfg-gg8rn" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.984123 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:26 crc kubenswrapper[4593]: I0129 12:25:26.984205 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrz9\" (UniqueName: \"kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.085748 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.085809 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfrz9\" (UniqueName: \"kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.086143 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21818d64-20a5-4483-8f13-919b612d1007" path="/var/lib/kubelet/pods/21818d64-20a5-4483-8f13-919b612d1007/volumes" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.086349 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.109396 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfrz9\" (UniqueName: \"kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9\") pod \"crc-debug-n8b5q\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.178503 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:27 crc kubenswrapper[4593]: I0129 12:25:27.460550 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" event={"ID":"16cd7214-5ee4-4072-a42a-9a51b9deea30","Type":"ContainerStarted","Data":"c863eb19aa45cab50d257db50f8ac6163ff5b0bbdf2c06af4d6b0e94e85d8801"} Jan 29 12:25:28 crc kubenswrapper[4593]: I0129 12:25:28.470709 4593 generic.go:334] "Generic (PLEG): container finished" podID="16cd7214-5ee4-4072-a42a-9a51b9deea30" containerID="1c377ca355fa720f0d286a362dd30108927c61a24acc46c9847397398d91107e" exitCode=0 Jan 29 12:25:28 crc kubenswrapper[4593]: I0129 12:25:28.470807 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" event={"ID":"16cd7214-5ee4-4072-a42a-9a51b9deea30","Type":"ContainerDied","Data":"1c377ca355fa720f0d286a362dd30108927c61a24acc46c9847397398d91107e"} Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.599044 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.743808 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host\") pod \"16cd7214-5ee4-4072-a42a-9a51b9deea30\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.744197 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfrz9\" (UniqueName: \"kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9\") pod \"16cd7214-5ee4-4072-a42a-9a51b9deea30\" (UID: \"16cd7214-5ee4-4072-a42a-9a51b9deea30\") " Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.743936 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host" (OuterVolumeSpecName: "host") pod "16cd7214-5ee4-4072-a42a-9a51b9deea30" (UID: "16cd7214-5ee4-4072-a42a-9a51b9deea30"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.761319 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9" (OuterVolumeSpecName: "kube-api-access-wfrz9") pod "16cd7214-5ee4-4072-a42a-9a51b9deea30" (UID: "16cd7214-5ee4-4072-a42a-9a51b9deea30"). InnerVolumeSpecName "kube-api-access-wfrz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.849121 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16cd7214-5ee4-4072-a42a-9a51b9deea30-host\") on node \"crc\" DevicePath \"\"" Jan 29 12:25:29 crc kubenswrapper[4593]: I0129 12:25:29.849378 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfrz9\" (UniqueName: \"kubernetes.io/projected/16cd7214-5ee4-4072-a42a-9a51b9deea30-kube-api-access-wfrz9\") on node \"crc\" DevicePath \"\"" Jan 29 12:25:30 crc kubenswrapper[4593]: I0129 12:25:30.221239 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-n8b5q"] Jan 29 12:25:30 crc kubenswrapper[4593]: I0129 12:25:30.233255 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-n8b5q"] Jan 29 12:25:30 crc kubenswrapper[4593]: I0129 12:25:30.493576 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c863eb19aa45cab50d257db50f8ac6163ff5b0bbdf2c06af4d6b0e94e85d8801" Jan 29 12:25:30 crc kubenswrapper[4593]: I0129 12:25:30.494394 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-n8b5q" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.088017 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16cd7214-5ee4-4072-a42a-9a51b9deea30" path="/var/lib/kubelet/pods/16cd7214-5ee4-4072-a42a-9a51b9deea30/volumes" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.495862 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-cnspd"] Jan 29 12:25:31 crc kubenswrapper[4593]: E0129 12:25:31.496797 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16cd7214-5ee4-4072-a42a-9a51b9deea30" containerName="container-00" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.496825 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="16cd7214-5ee4-4072-a42a-9a51b9deea30" containerName="container-00" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.497231 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="16cd7214-5ee4-4072-a42a-9a51b9deea30" containerName="container-00" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.498729 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.501552 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dw4s4"/"default-dockercfg-gg8rn" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.684382 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms74m\" (UniqueName: \"kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.684469 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.786567 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms74m\" (UniqueName: \"kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.786692 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.786827 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.810933 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms74m\" (UniqueName: \"kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m\") pod \"crc-debug-cnspd\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: I0129 12:25:31.819761 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:31 crc kubenswrapper[4593]: W0129 12:25:31.877499 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1325199a_5a2b_4b86_90a2_cbac24cc029c.slice/crio-30972f9b26bedad4d62d801f62238e529d0bcd80f5cad19f7b83f0c1b499fdf7 WatchSource:0}: Error finding container 30972f9b26bedad4d62d801f62238e529d0bcd80f5cad19f7b83f0c1b499fdf7: Status 404 returned error can't find the container with id 30972f9b26bedad4d62d801f62238e529d0bcd80f5cad19f7b83f0c1b499fdf7 Jan 29 12:25:32 crc kubenswrapper[4593]: I0129 12:25:32.509769 4593 generic.go:334] "Generic (PLEG): container finished" podID="1325199a-5a2b-4b86-90a2-cbac24cc029c" containerID="29677e210c78aebc6aa79ae1c919cd251d1bef19cd76388c6269f96a8c5b559f" exitCode=0 Jan 29 12:25:32 crc kubenswrapper[4593]: I0129 12:25:32.510113 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" event={"ID":"1325199a-5a2b-4b86-90a2-cbac24cc029c","Type":"ContainerDied","Data":"29677e210c78aebc6aa79ae1c919cd251d1bef19cd76388c6269f96a8c5b559f"} Jan 29 12:25:32 crc kubenswrapper[4593]: I0129 12:25:32.510160 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" event={"ID":"1325199a-5a2b-4b86-90a2-cbac24cc029c","Type":"ContainerStarted","Data":"30972f9b26bedad4d62d801f62238e529d0bcd80f5cad19f7b83f0c1b499fdf7"} Jan 29 12:25:32 crc kubenswrapper[4593]: I0129 12:25:32.557026 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-cnspd"] Jan 29 12:25:32 crc kubenswrapper[4593]: I0129 12:25:32.566040 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dw4s4/crc-debug-cnspd"] Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.667236 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.835850 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host\") pod \"1325199a-5a2b-4b86-90a2-cbac24cc029c\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.835928 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms74m\" (UniqueName: \"kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m\") pod \"1325199a-5a2b-4b86-90a2-cbac24cc029c\" (UID: \"1325199a-5a2b-4b86-90a2-cbac24cc029c\") " Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.835984 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host" (OuterVolumeSpecName: "host") pod "1325199a-5a2b-4b86-90a2-cbac24cc029c" (UID: "1325199a-5a2b-4b86-90a2-cbac24cc029c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.842828 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m" (OuterVolumeSpecName: "kube-api-access-ms74m") pod "1325199a-5a2b-4b86-90a2-cbac24cc029c" (UID: "1325199a-5a2b-4b86-90a2-cbac24cc029c"). 
InnerVolumeSpecName "kube-api-access-ms74m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.937829 4593 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1325199a-5a2b-4b86-90a2-cbac24cc029c-host\") on node \"crc\" DevicePath \"\"" Jan 29 12:25:33 crc kubenswrapper[4593]: I0129 12:25:33.938125 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms74m\" (UniqueName: \"kubernetes.io/projected/1325199a-5a2b-4b86-90a2-cbac24cc029c-kube-api-access-ms74m\") on node \"crc\" DevicePath \"\"" Jan 29 12:25:34 crc kubenswrapper[4593]: I0129 12:25:34.536179 4593 scope.go:117] "RemoveContainer" containerID="29677e210c78aebc6aa79ae1c919cd251d1bef19cd76388c6269f96a8c5b559f" Jan 29 12:25:34 crc kubenswrapper[4593]: I0129 12:25:34.536355 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/crc-debug-cnspd" Jan 29 12:25:35 crc kubenswrapper[4593]: I0129 12:25:35.086163 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1325199a-5a2b-4b86-90a2-cbac24cc029c" path="/var/lib/kubelet/pods/1325199a-5a2b-4b86-90a2-cbac24cc029c/volumes" Jan 29 12:26:03 crc kubenswrapper[4593]: I0129 12:26:03.947372 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:26:03 crc kubenswrapper[4593]: I0129 12:26:03.948071 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:26:30 crc kubenswrapper[4593]: I0129 12:26:30.665504 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59844fc4b6-zctck_07d138d8-a5fa-4b77-80e5-924dba8de4c0/barbican-api/0.log" Jan 29 12:26:30 crc kubenswrapper[4593]: I0129 12:26:30.779973 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-59844fc4b6-zctck_07d138d8-a5fa-4b77-80e5-924dba8de4c0/barbican-api-log/0.log" Jan 29 12:26:31 crc kubenswrapper[4593]: I0129 12:26:31.574119 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cf8bfd486-7dlhx_5f3c398f-928a-4f7e-9e76-6978b8a3673e/barbican-keystone-listener/0.log" Jan 29 12:26:31 crc kubenswrapper[4593]: I0129 12:26:31.592264 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5947965cdc-wl48v_564d3b50-7cec-4913-bac8-64af532aa32f/barbican-worker/0.log" Jan 29 12:26:31 crc kubenswrapper[4593]: I0129 12:26:31.639278 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cf8bfd486-7dlhx_5f3c398f-928a-4f7e-9e76-6978b8a3673e/barbican-keystone-listener-log/0.log" Jan 29 12:26:31 crc kubenswrapper[4593]: I0129 12:26:31.853498 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5947965cdc-wl48v_564d3b50-7cec-4913-bac8-64af532aa32f/barbican-worker-log/0.log" Jan 29 12:26:31 crc kubenswrapper[4593]: I0129 12:26:31.928983 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xm4sz_e4241343-d4f5-4690-972e-55f054a93f30/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.139503 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/ceilometer-central-agent/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.168245 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/proxy-httpd/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.196521 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/ceilometer-notification-agent/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.242019 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8581bb16-8d35-4521-8886-3c71554a3a4d/sg-core/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.472683 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7ea14af-5b7c-44d6-a34c-1a344bfc96ef/cinder-api/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.497466 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7ea14af-5b7c-44d6-a34c-1a344bfc96ef/cinder-api-log/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.747199 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5516e5e9-a6e4-4877-bd34-af4128cc7e33/cinder-scheduler/0.log" Jan 29 12:26:32 crc kubenswrapper[4593]: I0129 12:26:32.784435 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_5516e5e9-a6e4-4877-bd34-af4128cc7e33/probe/0.log" Jan 29 12:26:33 crc kubenswrapper[4593]: I0129 12:26:33.502147 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-z8tt5_83fa3cd4-ce6a-44bb-b652-c783504941f9/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:33 crc kubenswrapper[4593]: I0129 12:26:33.511706 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-27mbg_80d7dd41-691a-4411-97c2-91245d43b8ea/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:33 crc kubenswrapper[4593]: I0129 12:26:33.711942 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/init/0.log" Jan 29 12:26:33 crc kubenswrapper[4593]: I0129 12:26:33.946421 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:26:33 crc kubenswrapper[4593]: I0129 12:26:33.946808 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.019672 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-g462j_fee0ef55-8edb-456c-9344-98a3b34d3aab/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.054110 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/init/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.210851 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-67cb876dc9-mqmln_07012c75-f2fe-400a-b511-d0cc18a1ca9c/dnsmasq-dns/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.353890 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43872652-3bb2-4a5c-9b13-cb25d625cd01/glance-log/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.410614 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_43872652-3bb2-4a5c-9b13-cb25d625cd01/glance-httpd/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.599026 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c4f0192e-509d-46a4-9a2a-c82106019381/glance-httpd/0.log" Jan 29 12:26:34 crc kubenswrapper[4593]: I0129 12:26:34.671937 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c4f0192e-509d-46a4-9a2a-c82106019381/glance-log/0.log" Jan 29 12:26:35 crc kubenswrapper[4593]: I0129 12:26:35.008881 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon/2.log" Jan 29 12:26:35 crc kubenswrapper[4593]: I0129 12:26:35.046618 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon/1.log" Jan 29 12:26:35 crc kubenswrapper[4593]: I0129 12:26:35.504271 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-x2n68_0418390b-7622-490c-ad95-ec5eac075440/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:35 crc kubenswrapper[4593]: I0129 12:26:35.507592 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-p4f88_62d982c9-eb7a-4d9d-9cdd-2248c63b06fb/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:35 crc kubenswrapper[4593]: I0129 12:26:35.811574 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5bdffb4784-5zp8q_be4a01cd-2eb7-48e8-8a7e-eb02f8851188/horizon-log/0.log" Jan 29 12:26:36 crc kubenswrapper[4593]: I0129 12:26:36.018306 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29494801-8jgxn_f7d47080-9737-4b86-9e40-a6c6bf7f1709/keystone-cron/0.log" Jan 29 12:26:36 crc kubenswrapper[4593]: I0129 12:26:36.108199 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6d0c0ba2-e8ed-4361-8aff-e71714a1617c/kube-state-metrics/0.log" Jan 29 12:26:36 crc kubenswrapper[4593]: I0129 12:26:36.370984 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7f96568f6f-lfzv9_e2e767a2-2e4c-4a41-995f-1f0ca9248d1a/keystone-api/0.log" Jan 29 12:26:36 crc kubenswrapper[4593]: I0129 12:26:36.459256 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-jt98j_1f7fe168-4498-4002-9233-d6c2d9f115fb/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:37 crc kubenswrapper[4593]: I0129 12:26:37.106315 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ggvct_4c7cff3f-040a-4499-825c-3cccd015326a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:37 crc kubenswrapper[4593]: I0129 12:26:37.271984 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84867bd7b9-4vrb9_174d0d16-4c6e-403a-bf10-0a69b4e98fb1/neutron-httpd/0.log" Jan 29 12:26:37 crc kubenswrapper[4593]: I0129 12:26:37.642268 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-84867bd7b9-4vrb9_174d0d16-4c6e-403a-bf10-0a69b4e98fb1/neutron-api/0.log" Jan 29 12:26:38 crc kubenswrapper[4593]: I0129 12:26:38.246584 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b50238c6-e2ee-4e0b-a9c9-ded7ee100c6f/nova-cell0-conductor-conductor/0.log" Jan 29 12:26:38 crc kubenswrapper[4593]: I0129 12:26:38.410266 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bee10dce-c68f-47f4-84e0-623f276964d8/nova-cell1-conductor-conductor/0.log" Jan 29 12:26:38 crc kubenswrapper[4593]: I0129 12:26:38.865428 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0b25e9a9-4f12-4b7f-9001-74b6c3feb118/nova-cell1-novncproxy-novncproxy/0.log" Jan 29 12:26:38 crc kubenswrapper[4593]: I0129 12:26:38.881949 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d08c570-1374-4c5a-832e-c973d7a39796/nova-api-log/0.log" Jan 29 12:26:39 crc kubenswrapper[4593]: I0129 12:26:39.124372 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rtfdg_f45f3aca-42e1-4105-b843-f5288550ce8c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:39 crc kubenswrapper[4593]: I0129 12:26:39.300332 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_649faf5c-e6bb-4e3d-8cb5-28c57f100008/nova-metadata-log/0.log" Jan 29 12:26:39 crc kubenswrapper[4593]: I0129 12:26:39.391861 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_0d08c570-1374-4c5a-832e-c973d7a39796/nova-api-api/0.log" Jan 29 12:26:39 crc kubenswrapper[4593]: I0129 12:26:39.783293 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/mysql-bootstrap/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.036405 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/mysql-bootstrap/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.106986 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c1755998-9149-49be-b10f-c4fe029728bc/galera/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.255435 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4eff0b9f-e2c4-4ae0-9b42-585f9141d740/nova-scheduler-scheduler/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.609832 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/mysql-bootstrap/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.877367 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/galera/0.log" Jan 29 12:26:40 crc kubenswrapper[4593]: I0129 12:26:40.926533 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6674f537-f800-4b05-912c-b2671e504c17/mysql-bootstrap/0.log" Jan 29 12:26:41 crc kubenswrapper[4593]: I0129 12:26:41.073309 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_220bdfcb-98c4-4c78-8d95-ea64edfaf1ab/openstackclient/0.log" Jan 29 12:26:41 crc kubenswrapper[4593]: I0129 12:26:41.383975 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-cc9qq_df5842a4-132b-4c71-a970-efe4f00a3827/ovn-controller/0.log" Jan 29 12:26:41 crc kubenswrapper[4593]: I0129 12:26:41.471827 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-g6lk4_9299d646-8191-4da6-a2d1-d5a8c6492e91/openstack-network-exporter/0.log" Jan 29 12:26:41 crc kubenswrapper[4593]: I0129 12:26:41.506443 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_649faf5c-e6bb-4e3d-8cb5-28c57f100008/nova-metadata-metadata/0.log" Jan 29 12:26:41 crc kubenswrapper[4593]: I0129 12:26:41.789943 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server-init/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.483685 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server-init/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.560558 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovsdb-server/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.663419 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_dc6f5a6c-3bf0-4f78-89f3-1e2683a37958/memcached/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.823956 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5320cc21-470d-450c-afa0-c5926e3243c6/openstack-network-exporter/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.858952 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5320cc21-470d-450c-afa0-c5926e3243c6/ovn-northd/0.log" Jan 29 12:26:42 crc kubenswrapper[4593]: I0129 12:26:42.987384 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fd9a4c00-318d-4bd1-85cb-40971234c3cd/openstack-network-exporter/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.214363 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9/openstack-network-exporter/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.731445 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fd9a4c00-318d-4bd1-85cb-40971234c3cd/ovsdbserver-nb/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.731509 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_c9b0d5f3-d9a9-44c9-a01b-76c54b9903b9/ovsdbserver-sb/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.732023 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-x49lj_22811af4-f063-480b-81b2-6c09b6526fea/ovs-vswitchd/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.807848 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ftxjl_80db2d7c-94e6-418b-a0b4-2b4064356e4b/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:43 crc kubenswrapper[4593]: I0129 12:26:43.968975 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869645f564-n6fhc_ae8bb4fd-b1d8-4a6a-ac95-9935c4458747/placement-api/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.006530 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/setup-container/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.235564 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/rabbitmq/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.250404 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-869645f564-n6fhc_ae8bb4fd-b1d8-4a6a-ac95-9935c4458747/placement-log/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.278932 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_66e64ba6-3c75-4430-9f03-0fe9dbb37459/setup-container/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.360756 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/setup-container/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.544950 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/setup-container/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.679642 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-jps44_9a263e61-6654-4030-bd96-c1baa9314111/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.682033 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_63184534-fd04-4ef9-9c56-de6c30745ec4/rabbitmq/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.867162 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-7tzj5_ce80c16f-5109-46b9-9438-4f05a4132175/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.893617 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-rvlvb_c3e4e3e3-1994-40a5-bab8-d84db2f44ddb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:44 crc kubenswrapper[4593]: I0129 12:26:44.957701 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lz46t_b1f286ec-6f85-44c4-94f5-f66bc21c2a64/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.129190 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-cfk97_c22e1d76-6585-46e2-9c31-5c002e021882/ssh-known-hosts-edpm-deployment/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.390377 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58d6d94967-wdzcg_f1bc6621-0892-452c-9f95-54554f8c6e68/proxy-httpd/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.413731 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jbnzf_4d1e7e96-e120-43f1-bff0-ea3d624e621b/swift-ring-rebalance/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.454236 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-58d6d94967-wdzcg_f1bc6621-0892-452c-9f95-54554f8c6e68/proxy-server/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.657621 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-auditor/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.686478 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-reaper/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.718402 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-replicator/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.783307 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-auditor/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.978639 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/account-server/0.log" Jan 29 12:26:45 crc kubenswrapper[4593]: I0129 12:26:45.989777 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-auditor/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.057332 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-server/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.094836 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-updater/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.098960 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/container-replicator/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.240681 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-expirer/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.279884 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-server/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.301590 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-replicator/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.305477 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/object-updater/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.344092 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/rsync/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.498628 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_307ad072-fdfc-4c55-8891-bc041d755b1a/swift-recon-cron/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.583405 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-vw5zz_ee0ea7fe-3ea4-4944-8101-b03f1566882f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.615005 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d5ea9892-a149-4cfe-bb9c-ef636eacd125/tempest-tests-tempest-tests-runner/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.763913 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_be3a2ae9-6f0e-459e-bd91-10a92871767c/test-operator-logs-container/0.log" Jan 29 12:26:46 crc kubenswrapper[4593]: I0129 12:26:46.848171 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vcm9p_0f5fb9be-3781-4b9a-96d8-705593907345/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 29 12:27:03 crc kubenswrapper[4593]: I0129 12:27:03.945834 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:27:03 crc kubenswrapper[4593]: I0129 12:27:03.946479 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:27:03 crc kubenswrapper[4593]: I0129 12:27:03.946533 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:27:03 crc kubenswrapper[4593]: I0129 12:27:03.947352 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:27:03 crc kubenswrapper[4593]: I0129 12:27:03.947420 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" gracePeriod=600 Jan 29 12:27:04 crc kubenswrapper[4593]: E0129 12:27:04.291491 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:27:04 crc kubenswrapper[4593]: I0129 12:27:04.413363 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" exitCode=0 Jan 29 12:27:04 crc kubenswrapper[4593]: I0129 12:27:04.413410 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"} Jan 29 12:27:04 crc kubenswrapper[4593]: I0129 12:27:04.413459 4593 scope.go:117] "RemoveContainer" containerID="0c951b718f5f8a81543c1227b8e681ac1add853c973a503786430be2a5132d27" Jan 29 12:27:04 crc kubenswrapper[4593]: I0129 12:27:04.414244 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:27:04 crc kubenswrapper[4593]: E0129 12:27:04.414476 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:27:16 crc kubenswrapper[4593]: I0129 12:27:16.074623 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:27:16 crc kubenswrapper[4593]: E0129 12:27:16.075437 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:27:18 crc kubenswrapper[4593]: I0129 12:27:18.466326 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:27:18 crc kubenswrapper[4593]: I0129 12:27:18.746268 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:27:18 crc kubenswrapper[4593]: I0129 12:27:18.781176 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log" Jan 29 12:27:18 crc kubenswrapper[4593]: I0129 12:27:18.787419 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/pull/0.log" Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.066096 
Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.072199 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/extract/0.log"
Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.107774 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_91a794a7e3b5d6b5b80af8b7cf4bf1977e975505c1d4ffefc9ca05c759mhhpc_d389d4ca-e0e5-4a15-8ff2-afa4745998fa/util/0.log"
Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.415208 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-7ns7q_c5e6d3a8-d6d9-4445-9708-28b88928333e/manager/0.log"
Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.462061 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-7hmqc_e35e9127-0337-4860-b938-bb477a408f1e/manager/0.log"
Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.612524 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-xw2pz_734187ee-1e17-4cdc-b3bb-cfbd6e424793/manager/0.log"
Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.868919 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-2ml7m_499923d8-4593-4225-bc4c-6166003a0bb3/manager/0.log"
Jan 29 12:27:19 crc kubenswrapper[4593]: I0129 12:27:19.919948 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-xqcrc_50471b23-1d0d-4bd9-a66f-a89b3a39a612/manager/0.log"
Jan 29 12:27:20 crc kubenswrapper[4593]: I0129 12:27:20.130392 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-98l2v_50a8381e-e59b-4400-9209-c33ef4f99c23/manager/0.log"
Jan 29 12:27:20 crc kubenswrapper[4593]: I0129 12:27:20.465681 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-6zkvt_c2cda883-37e6-4c21-b320-4962ffdc98b3/manager/0.log"
Jan 29 12:27:20 crc kubenswrapper[4593]: I0129 12:27:20.500411 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-t584q_812ebcfb-766d-4a1b-aaaa-2dd5a96ce047/manager/0.log"
Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.070180 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-xf5fn_cdb96936-cd34-44fd-94b5-5570688fdfe6/manager/0.log"
Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.094056 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-c89cq_0881deda-c42a-48d8-9059-b7eaf66c0f9f/manager/0.log"
Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.385474 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-zx6r8_62efedcb-a194-4692-8e84-a0da7777a400/manager/0.log"
Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.403681 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-qt87l_336c4e93-7d0b-4570-aafc-22e0f812db12/manager/0.log"
Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.745238 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-8kf6p_40ab1792-0354-4c78-ac44-a217fbc610a9/manager/0.log"
Jan 29 12:27:21 crc kubenswrapper[4593]: I0129 12:27:21.757849 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-9dbds_ba6fb45a-59ff-42ee-acb0-0ee43d001e1e/manager/0.log"
Jan 29 12:27:22 crc kubenswrapper[4593]: I0129 12:27:22.040052 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dccvnb_f6e2fc57-0cce-4f5a-bf3e-63efbfff1073/manager/0.log"
Jan 29 12:27:22 crc kubenswrapper[4593]: I0129 12:27:22.236288 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-55ccc59995-d7xm7_c8e623f1-2830-4c78-b17a-6000f32978a3/operator/0.log"
Jan 29 12:27:22 crc kubenswrapper[4593]: I0129 12:27:22.626688 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-sbxwt_0661b605-afb6-4341-9703-ea25a3afc19d/registry-server/0.log"
Jan 29 12:27:22 crc kubenswrapper[4593]: I0129 12:27:22.993011 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-885pn_9b88fe2c-a82a-4284-961a-8af3818815ec/manager/0.log"
Jan 29 12:27:23 crc kubenswrapper[4593]: I0129 12:27:23.171544 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-kttv8_2c7ec826-43f0-49f3-9d96-4330427e4ed9/manager/0.log"
Jan 29 12:27:23 crc kubenswrapper[4593]: I0129 12:27:23.324712 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6d898fd894-sh94p_960bb326-dc22-43e5-bc4f-05c9ce9e26a9/manager/0.log"
Jan 29 12:27:23 crc kubenswrapper[4593]: I0129 12:27:23.342350 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tfkk2_2f32633b-0490-4885-9543-a140c807c742/operator/0.log"
Jan 29 12:27:23 crc kubenswrapper[4593]: I0129 12:27:23.734671 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-k4b7q_0e86fa54-1e41-4bb9-86c7-a0ea0d919270/manager/0.log"
Jan 29 12:27:23 crc kubenswrapper[4593]: I0129 12:27:23.911457 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-z4mp8_ea8d9bb8-bdec-453d-a308-28b962971254/manager/0.log"
Jan 29 12:27:24 crc kubenswrapper[4593]: I0129 12:27:24.062798 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-ltfr4_b45fb247-850e-40b4-b62e-8551d55efcba/manager/0.log"
Jan 29 12:27:24 crc kubenswrapper[4593]: I0129 12:27:24.174112 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-zmssx_0259a320-8da9-48e5-8d73-25b09774d9c1/manager/0.log"
Jan 29 12:27:28 crc kubenswrapper[4593]: I0129 12:27:28.075034 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"
Jan 29 12:27:28 crc kubenswrapper[4593]: E0129 12:27:28.075582 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:27:41 crc kubenswrapper[4593]: I0129 12:27:41.076007 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"
Jan 29 12:27:41 crc kubenswrapper[4593]: E0129 12:27:41.080030 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:27:47 crc kubenswrapper[4593]: I0129 12:27:47.983000 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pf5p2_9bce548b-2c64-4ac5-a797-979de4cf7656/control-plane-machine-set-operator/0.log"
Jan 29 12:27:48 crc kubenswrapper[4593]: I0129 12:27:48.183146 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vtdww_bb259eac-6aa7-42d9-883b-2af6b63af4b8/machine-api-operator/0.log"
Jan 29 12:27:48 crc kubenswrapper[4593]: I0129 12:27:48.238367 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vtdww_bb259eac-6aa7-42d9-883b-2af6b63af4b8/kube-rbac-proxy/0.log"
Jan 29 12:27:52 crc kubenswrapper[4593]: I0129 12:27:52.075992 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"
Jan 29 12:27:52 crc kubenswrapper[4593]: E0129 12:27:52.077322 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:28:01 crc kubenswrapper[4593]: I0129 12:28:01.682276 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-qhfhj_59d387c2-4d0b-4d6c-a0d8-2230657bebd0/cert-manager-controller/0.log"
Jan 29 12:28:02 crc kubenswrapper[4593]: I0129 12:28:02.246025 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lw7j7_79aa2cc5-a031-412d-a4c7-ba9251f84ed6/cert-manager-cainjector/0.log"
Jan 29 12:28:02 crc kubenswrapper[4593]: I0129 12:28:02.426465 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-t7s4r_e2b5756a-c46e-4e76-90bf-0a5c7c1dc759/cert-manager-webhook/0.log"
Jan 29 12:28:05 crc kubenswrapper[4593]: I0129 12:28:05.090131 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"
containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:28:05 crc kubenswrapper[4593]: E0129 12:28:05.090952 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:28:15 crc kubenswrapper[4593]: I0129 12:28:15.823443 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-nck62_2ad95dc2-d55a-4dc1-a30e-9c2186ea5cb2/nmstate-console-plugin/0.log" Jan 29 12:28:16 crc kubenswrapper[4593]: I0129 12:28:16.034604 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q2lbc_ea391d24-e32c-440b-b5c2-218920192125/nmstate-handler/0.log" Jan 29 12:28:16 crc kubenswrapper[4593]: I0129 12:28:16.277254 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-q2995_7a32568f-244c-432b-8186-683e8bc10371/kube-rbac-proxy/0.log" Jan 29 12:28:16 crc kubenswrapper[4593]: I0129 12:28:16.298965 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-q2995_7a32568f-244c-432b-8186-683e8bc10371/nmstate-metrics/0.log" Jan 29 12:28:16 crc kubenswrapper[4593]: I0129 12:28:16.432187 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-xmhmc_b2e0c4ff-8a2b-474d-8414-a0026d61b11e/nmstate-operator/0.log" Jan 29 12:28:16 crc kubenswrapper[4593]: I0129 12:28:16.513449 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-47n46_72d4f068-dc20-44d0-aca6-c8f0992536e6/nmstate-webhook/0.log" Jan 29 12:28:19 crc kubenswrapper[4593]: I0129 12:28:19.079375 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:28:19 crc kubenswrapper[4593]: E0129 12:28:19.079992 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:28:34 crc kubenswrapper[4593]: I0129 12:28:34.075493 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:28:34 crc kubenswrapper[4593]: E0129 12:28:34.076350 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.352099 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:28:35 crc 
kubenswrapper[4593]: E0129 12:28:35.352716 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1325199a-5a2b-4b86-90a2-cbac24cc029c" containerName="container-00" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.352733 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="1325199a-5a2b-4b86-90a2-cbac24cc029c" containerName="container-00" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.353013 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="1325199a-5a2b-4b86-90a2-cbac24cc029c" containerName="container-00" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.361045 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.426178 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.441063 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.441212 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.441270 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm2nc\" (UniqueName: \"kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.542615 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.543012 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.543266 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm2nc\" (UniqueName: \"kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.546033 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.546552 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.564034 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm2nc\" (UniqueName: \"kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc\") pod \"redhat-operators-t8n82\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:35 crc kubenswrapper[4593]: I0129 12:28:35.718691 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:36 crc kubenswrapper[4593]: I0129 12:28:36.239906 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:28:36 crc kubenswrapper[4593]: I0129 12:28:36.311035 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerStarted","Data":"f009cbeccec362360001e7cb5c502e81a1edd3147f1f8aade495c66564bbfd8c"} Jan 29 12:28:37 crc kubenswrapper[4593]: I0129 12:28:37.332859 4593 generic.go:334] "Generic (PLEG): container finished" podID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerID="326801ee869f568d18145038bfc3feeb923901fc80f9ebe2dd1bfa5dfa227fba" exitCode=0 Jan 29 12:28:37 crc kubenswrapper[4593]: I0129 12:28:37.333177 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerDied","Data":"326801ee869f568d18145038bfc3feeb923901fc80f9ebe2dd1bfa5dfa227fba"} Jan 29 12:28:39 crc kubenswrapper[4593]: I0129 12:28:39.354436 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerStarted","Data":"cd45d0278c2bcae6a565207daa122f821a4b42623055e66d7bdb3205bf89dcd8"} Jan 29 12:28:48 crc kubenswrapper[4593]: I0129 12:28:48.332965 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hvqbg_3462ad7c-24f3-4c73-990d-a0f471d08d1d/kube-rbac-proxy/0.log" Jan 29 12:28:48 crc kubenswrapper[4593]: I0129 12:28:48.365485 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hvqbg_3462ad7c-24f3-4c73-990d-a0f471d08d1d/controller/0.log" Jan 29 12:28:48 crc kubenswrapper[4593]: I0129 12:28:48.930137 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.075548 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:28:49 crc kubenswrapper[4593]: E0129 12:28:49.075864 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.107738 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.165568 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.190507 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.207398 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.375088 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.444140 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.479675 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.483774 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.676600 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-frr-files/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.712121 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-metrics/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.722379 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/cp-reloader/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.723842 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/controller/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.960039 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/kube-rbac-proxy-frr/0.log" Jan 29 12:28:49 crc kubenswrapper[4593]: I0129 12:28:49.990593 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/kube-rbac-proxy/0.log" Jan 29 12:28:50 crc kubenswrapper[4593]: I0129 12:28:50.027379 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/frr-metrics/0.log" Jan 
29 12:28:50 crc kubenswrapper[4593]: I0129 12:28:50.298941 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dj42h_45d808cf-80c4-4f7b-a172-76e4ecd9e37b/frr-k8s-webhook-server/0.log" Jan 29 12:28:50 crc kubenswrapper[4593]: I0129 12:28:50.399954 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/reloader/0.log" Jan 29 12:28:50 crc kubenswrapper[4593]: I0129 12:28:50.731622 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5bf4d9f4bd-ll9bk_421156e9-d8d3-4112-bd58-d09c40a70a12/manager/0.log" Jan 29 12:28:50 crc kubenswrapper[4593]: I0129 12:28:50.837248 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7fdc78c47c-w2tv4_c3381187-83f6-4877-8d72-3ed30f360a86/webhook-server/0.log" Jan 29 12:28:51 crc kubenswrapper[4593]: I0129 12:28:51.157039 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m77zw_37969e5d-3111-45cc-a711-da443a473c52/kube-rbac-proxy/0.log" Jan 29 12:28:51 crc kubenswrapper[4593]: I0129 12:28:51.659375 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m77zw_37969e5d-3111-45cc-a711-da443a473c52/speaker/0.log" Jan 29 12:28:51 crc kubenswrapper[4593]: I0129 12:28:51.760748 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-54s6j_9eb36e6e-e554-4b1a-9750-cd81c4c8d985/frr/0.log" Jan 29 12:28:52 crc kubenswrapper[4593]: I0129 12:28:52.468520 4593 generic.go:334] "Generic (PLEG): container finished" podID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerID="cd45d0278c2bcae6a565207daa122f821a4b42623055e66d7bdb3205bf89dcd8" exitCode=0 Jan 29 12:28:52 crc kubenswrapper[4593]: I0129 12:28:52.468568 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerDied","Data":"cd45d0278c2bcae6a565207daa122f821a4b42623055e66d7bdb3205bf89dcd8"} Jan 29 12:28:54 crc kubenswrapper[4593]: I0129 12:28:54.487759 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerStarted","Data":"00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278"} Jan 29 12:28:54 crc kubenswrapper[4593]: I0129 12:28:54.514744 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t8n82" podStartSLOduration=3.120361058 podStartE2EDuration="19.51470227s" podCreationTimestamp="2026-01-29 12:28:35 +0000 UTC" firstStartedPulling="2026-01-29 12:28:37.33635985 +0000 UTC m=+5383.209394041" lastFinishedPulling="2026-01-29 12:28:53.730701052 +0000 UTC m=+5399.603735253" observedRunningTime="2026-01-29 12:28:54.512564942 +0000 UTC m=+5400.385599143" watchObservedRunningTime="2026-01-29 12:28:54.51470227 +0000 UTC m=+5400.387736471" Jan 29 12:28:55 crc kubenswrapper[4593]: I0129 12:28:55.720389 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:55 crc kubenswrapper[4593]: I0129 12:28:55.720451 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:28:56 crc kubenswrapper[4593]: I0129 12:28:56.775546 4593 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:28:56 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:28:56 crc kubenswrapper[4593]: > Jan 29 12:29:03 crc kubenswrapper[4593]: I0129 12:29:03.075845 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:29:03 crc kubenswrapper[4593]: E0129 12:29:03.077921 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:29:06 crc kubenswrapper[4593]: I0129 12:29:06.691312 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:29:06 crc kubenswrapper[4593]: I0129 12:29:06.767130 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:06 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:06 crc kubenswrapper[4593]: > Jan 29 12:29:06 crc kubenswrapper[4593]: I0129 12:29:06.981621 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:29:06 crc kubenswrapper[4593]: I0129 12:29:06.988970 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.040420 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.212558 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/util/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.291075 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/extract/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.293038 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcdbvcz_ae0a4079-4142-4fd5-bf8e-bf2adfa5ad11/pull/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.454848 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:29:07 crc 
kubenswrapper[4593]: I0129 12:29:07.681710 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.705297 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:29:07 crc kubenswrapper[4593]: I0129 12:29:07.772314 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.024593 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/util/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.025246 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/pull/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.074522 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713f5p8w_b514f100-8029-41bf-9315-9e8c18a7238a/extract/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.254597 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.490869 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.600256 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.662077 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.825087 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-utilities/0.log" Jan 29 12:29:08 crc kubenswrapper[4593]: I0129 12:29:08.893559 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/extract-content/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.237942 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.416915 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.430625 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.502515 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kt56h_f0d1455d-ba27-48f0-be57-3d8e91a63853/registry-server/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.506579 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.764151 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-content/0.log" Jan 29 12:29:09 crc kubenswrapper[4593]: I0129 12:29:09.794243 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/extract-utilities/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.199012 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-s2rlp_7a59fe58-c900-46ea-8ff2-8a7f49210dc3/marketplace-operator/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.345980 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.472149 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.542145 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.549099 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-57v5l_3ae70d27-10ec-4015-851d-d84aaf99d782/registry-server/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.641563 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.848076 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-content/0.log" Jan 29 12:29:10 crc kubenswrapper[4593]: I0129 12:29:10.892219 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/extract-utilities/0.log" Jan 29 12:29:11 crc kubenswrapper[4593]: I0129 12:29:11.051036 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v2f96_69a313ce-b443-4080-9eea-bde0c61dc59d/registry-server/0.log" Jan 29 12:29:11 crc kubenswrapper[4593]: I0129 12:29:11.159261 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-utilities/0.log" Jan 29 12:29:11 crc kubenswrapper[4593]: I0129 12:29:11.732088 4593 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-utilities/0.log" Jan 29 12:29:11 crc kubenswrapper[4593]: I0129 12:29:11.900157 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-content/0.log" Jan 29 12:29:11 crc kubenswrapper[4593]: I0129 12:29:11.960462 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-content/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.188736 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-content/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.229059 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/extract-utilities/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.257811 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t8n82_5b4febee-8f26-4e76-a4b6-09da10523b68/registry-server/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.383079 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.536140 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.572286 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.603590 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.844924 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-utilities/0.log" Jan 29 12:29:12 crc kubenswrapper[4593]: I0129 12:29:12.884611 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/extract-content/0.log" Jan 29 12:29:13 crc kubenswrapper[4593]: I0129 12:29:13.484583 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vbjtl_954251cb-5bea-456e-8d36-27eda2fe92d6/registry-server/0.log" Jan 29 12:29:16 crc kubenswrapper[4593]: I0129 12:29:16.767195 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:16 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:16 crc kubenswrapper[4593]: > Jan 29 12:29:18 crc kubenswrapper[4593]: I0129 12:29:18.075219 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:29:18 crc kubenswrapper[4593]: 
E0129 12:29:18.076430 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:29:26 crc kubenswrapper[4593]: I0129 12:29:26.772844 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:26 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:26 crc kubenswrapper[4593]: > Jan 29 12:29:33 crc kubenswrapper[4593]: I0129 12:29:33.074646 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:29:33 crc kubenswrapper[4593]: E0129 12:29:33.075276 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:29:36 crc kubenswrapper[4593]: I0129 12:29:36.788343 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:36 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:36 crc kubenswrapper[4593]: > Jan 29 12:29:46 crc kubenswrapper[4593]: I0129 12:29:46.777795 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:46 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:46 crc kubenswrapper[4593]: > Jan 29 12:29:48 crc kubenswrapper[4593]: I0129 12:29:48.075779 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:29:48 crc kubenswrapper[4593]: E0129 12:29:48.076124 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:29:56 crc kubenswrapper[4593]: I0129 12:29:56.779426 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:29:56 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:29:56 crc kubenswrapper[4593]: > Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.177906 4593 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h"] Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.179896 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.185320 4593 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.185611 4593 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.194570 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h"] Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.245428 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.245594 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.245693 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5gt9\" (UniqueName: \"kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.347468 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.347601 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.347650 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5gt9\" (UniqueName: \"kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: 
I0129 12:30:00.348521 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.356775 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.369160 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5gt9\" (UniqueName: \"kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9\") pod \"collect-profiles-29494830-v265h\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:00 crc kubenswrapper[4593]: I0129 12:30:00.508487 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:01 crc kubenswrapper[4593]: I0129 12:30:01.000532 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h"] Jan 29 12:30:01 crc kubenswrapper[4593]: I0129 12:30:01.199567 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" event={"ID":"04c1b6ee-aa78-4334-b212-4e15c4aceda7","Type":"ContainerStarted","Data":"ec4125f9487aabe08bbe0d53076ff552deb919191e3b90e2b41387a971ad58b7"} Jan 29 12:30:02 crc kubenswrapper[4593]: I0129 12:30:02.075614 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:30:02 crc kubenswrapper[4593]: E0129 12:30:02.076182 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:30:02 crc kubenswrapper[4593]: I0129 12:30:02.210136 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" event={"ID":"04c1b6ee-aa78-4334-b212-4e15c4aceda7","Type":"ContainerStarted","Data":"4aaea735207498aaa0a35ad4ef072f20cf4b60e5b44ae473861a8ce70920dc7d"} Jan 29 12:30:02 crc kubenswrapper[4593]: I0129 12:30:02.242160 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" podStartSLOduration=2.24213114 podStartE2EDuration="2.24213114s" podCreationTimestamp="2026-01-29 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:30:02.231160385 +0000 UTC m=+5468.104194586" watchObservedRunningTime="2026-01-29 12:30:02.24213114 +0000 
UTC m=+5468.115165321" Jan 29 12:30:03 crc kubenswrapper[4593]: I0129 12:30:03.221184 4593 generic.go:334] "Generic (PLEG): container finished" podID="04c1b6ee-aa78-4334-b212-4e15c4aceda7" containerID="4aaea735207498aaa0a35ad4ef072f20cf4b60e5b44ae473861a8ce70920dc7d" exitCode=0 Jan 29 12:30:03 crc kubenswrapper[4593]: I0129 12:30:03.221220 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" event={"ID":"04c1b6ee-aa78-4334-b212-4e15c4aceda7","Type":"ContainerDied","Data":"4aaea735207498aaa0a35ad4ef072f20cf4b60e5b44ae473861a8ce70920dc7d"} Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.611448 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.749287 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5gt9\" (UniqueName: \"kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9\") pod \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.749389 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume\") pod \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.749531 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume\") pod \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\" (UID: \"04c1b6ee-aa78-4334-b212-4e15c4aceda7\") " Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.750085 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume" (OuterVolumeSpecName: "config-volume") pod "04c1b6ee-aa78-4334-b212-4e15c4aceda7" (UID: "04c1b6ee-aa78-4334-b212-4e15c4aceda7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.750302 4593 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04c1b6ee-aa78-4334-b212-4e15c4aceda7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.755373 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04c1b6ee-aa78-4334-b212-4e15c4aceda7" (UID: "04c1b6ee-aa78-4334-b212-4e15c4aceda7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.756450 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9" (OuterVolumeSpecName: "kube-api-access-x5gt9") pod "04c1b6ee-aa78-4334-b212-4e15c4aceda7" (UID: "04c1b6ee-aa78-4334-b212-4e15c4aceda7"). InnerVolumeSpecName "kube-api-access-x5gt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.851508 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5gt9\" (UniqueName: \"kubernetes.io/projected/04c1b6ee-aa78-4334-b212-4e15c4aceda7-kube-api-access-x5gt9\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:04 crc kubenswrapper[4593]: I0129 12:30:04.851549 4593 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04c1b6ee-aa78-4334-b212-4e15c4aceda7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:05 crc kubenswrapper[4593]: I0129 12:30:05.285254 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" event={"ID":"04c1b6ee-aa78-4334-b212-4e15c4aceda7","Type":"ContainerDied","Data":"ec4125f9487aabe08bbe0d53076ff552deb919191e3b90e2b41387a971ad58b7"} Jan 29 12:30:05 crc kubenswrapper[4593]: I0129 12:30:05.285319 4593 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec4125f9487aabe08bbe0d53076ff552deb919191e3b90e2b41387a971ad58b7" Jan 29 12:30:05 crc kubenswrapper[4593]: I0129 12:30:05.285372 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-v265h" Jan 29 12:30:05 crc kubenswrapper[4593]: I0129 12:30:05.341897 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl"] Jan 29 12:30:05 crc kubenswrapper[4593]: I0129 12:30:05.351385 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494785-5jqfl"] Jan 29 12:30:06 crc kubenswrapper[4593]: I0129 12:30:06.800559 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:30:06 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:30:06 crc kubenswrapper[4593]: > Jan 29 12:30:07 crc kubenswrapper[4593]: I0129 12:30:07.086248 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4e2861-f7e0-40bb-bb77-b0fdd3498554" path="/var/lib/kubelet/pods/dc4e2861-f7e0-40bb-bb77-b0fdd3498554/volumes" Jan 29 12:30:16 crc kubenswrapper[4593]: I0129 12:30:16.777821 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:30:16 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:30:16 crc kubenswrapper[4593]: > Jan 29 12:30:17 crc kubenswrapper[4593]: I0129 12:30:17.075510 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:30:17 crc kubenswrapper[4593]: E0129 12:30:17.076179 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:30:17 crc 
kubenswrapper[4593]: I0129 12:30:17.680747 4593 scope.go:117] "RemoveContainer" containerID="774b5de0fbc462ffcb1b94ee57144a8198c30add9d0ae3a9eee99f2a26a14b82" Jan 29 12:30:26 crc kubenswrapper[4593]: I0129 12:30:26.790761 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:30:26 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:30:26 crc kubenswrapper[4593]: > Jan 29 12:30:26 crc kubenswrapper[4593]: I0129 12:30:26.791286 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:26 crc kubenswrapper[4593]: I0129 12:30:26.792050 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278"} pod="openshift-marketplace/redhat-operators-t8n82" containerMessage="Container registry-server failed startup probe, will be restarted" Jan 29 12:30:26 crc kubenswrapper[4593]: I0129 12:30:26.792089 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" containerID="cri-o://00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278" gracePeriod=30 Jan 29 12:30:28 crc kubenswrapper[4593]: I0129 12:30:28.088157 4593 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:30:28 crc kubenswrapper[4593]: I0129 12:30:28.507034 4593 generic.go:334] "Generic (PLEG): container finished" podID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerID="00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278" exitCode=0 Jan 29 12:30:28 crc kubenswrapper[4593]: I0129 12:30:28.507085 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerDied","Data":"00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278"} Jan 29 12:30:29 crc kubenswrapper[4593]: I0129 12:30:29.074889 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:30:29 crc kubenswrapper[4593]: E0129 12:30:29.075591 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:30:29 crc kubenswrapper[4593]: I0129 12:30:29.520705 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerStarted","Data":"842e35762b01a487fdff904d5d4a2263642ba451df60d98e32104c2eb4869908"} Jan 29 12:30:35 crc kubenswrapper[4593]: I0129 12:30:35.720215 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:35 crc kubenswrapper[4593]: I0129 12:30:35.722302 4593 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:36 crc kubenswrapper[4593]: I0129 12:30:36.778666 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" probeResult="failure" output=< Jan 29 12:30:36 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s Jan 29 12:30:36 crc kubenswrapper[4593]: > Jan 29 12:30:42 crc kubenswrapper[4593]: I0129 12:30:42.076513 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:30:42 crc kubenswrapper[4593]: E0129 12:30:42.077175 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:30:45 crc kubenswrapper[4593]: I0129 12:30:45.790114 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:45 crc kubenswrapper[4593]: I0129 12:30:45.847244 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:46 crc kubenswrapper[4593]: I0129 12:30:46.034726 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:30:47 crc kubenswrapper[4593]: I0129 12:30:47.699211 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t8n82" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server" containerID="cri-o://842e35762b01a487fdff904d5d4a2263642ba451df60d98e32104c2eb4869908" gracePeriod=2 Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:48.753334 4593 generic.go:334] "Generic (PLEG): container finished" podID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerID="842e35762b01a487fdff904d5d4a2263642ba451df60d98e32104c2eb4869908" exitCode=0 Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:48.753725 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerDied","Data":"842e35762b01a487fdff904d5d4a2263642ba451df60d98e32104c2eb4869908"} Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:48.753767 4593 scope.go:117] "RemoveContainer" containerID="00ff006605dd6dc0baa5b63261f7a4ff3fef69362f56ba2ac014140ec83c7278" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:48.928408 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.042292 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities\") pod \"5b4febee-8f26-4e76-a4b6-09da10523b68\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.042475 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm2nc\" (UniqueName: \"kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc\") pod \"5b4febee-8f26-4e76-a4b6-09da10523b68\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.042520 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content\") pod \"5b4febee-8f26-4e76-a4b6-09da10523b68\" (UID: \"5b4febee-8f26-4e76-a4b6-09da10523b68\") " Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.051401 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities" (OuterVolumeSpecName: "utilities") pod "5b4febee-8f26-4e76-a4b6-09da10523b68" (UID: "5b4febee-8f26-4e76-a4b6-09da10523b68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.051989 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc" (OuterVolumeSpecName: "kube-api-access-lm2nc") pod "5b4febee-8f26-4e76-a4b6-09da10523b68" (UID: "5b4febee-8f26-4e76-a4b6-09da10523b68"). InnerVolumeSpecName "kube-api-access-lm2nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.145646 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm2nc\" (UniqueName: \"kubernetes.io/projected/5b4febee-8f26-4e76-a4b6-09da10523b68-kube-api-access-lm2nc\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.145671 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.194389 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b4febee-8f26-4e76-a4b6-09da10523b68" (UID: "5b4febee-8f26-4e76-a4b6-09da10523b68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.247391 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b4febee-8f26-4e76-a4b6-09da10523b68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.774032 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t8n82" event={"ID":"5b4febee-8f26-4e76-a4b6-09da10523b68","Type":"ContainerDied","Data":"f009cbeccec362360001e7cb5c502e81a1edd3147f1f8aade495c66564bbfd8c"} Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.774085 4593 scope.go:117] "RemoveContainer" containerID="842e35762b01a487fdff904d5d4a2263642ba451df60d98e32104c2eb4869908" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.774099 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t8n82" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.796558 4593 scope.go:117] "RemoveContainer" containerID="cd45d0278c2bcae6a565207daa122f821a4b42623055e66d7bdb3205bf89dcd8" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.841004 4593 scope.go:117] "RemoveContainer" containerID="326801ee869f568d18145038bfc3feeb923901fc80f9ebe2dd1bfa5dfa227fba" Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.841153 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:30:50 crc kubenswrapper[4593]: I0129 12:30:49.850977 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t8n82"] Jan 29 12:30:51 crc kubenswrapper[4593]: I0129 12:30:51.085832 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" path="/var/lib/kubelet/pods/5b4febee-8f26-4e76-a4b6-09da10523b68/volumes" Jan 29 12:30:55 crc kubenswrapper[4593]: I0129 12:30:55.082981 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:30:55 crc kubenswrapper[4593]: E0129 12:30:55.084019 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:31:09 crc kubenswrapper[4593]: I0129 12:31:09.074977 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:31:09 crc kubenswrapper[4593]: E0129 12:31:09.075731 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" Jan 29 12:31:17 crc kubenswrapper[4593]: I0129 12:31:17.765346 4593 scope.go:117] "RemoveContainer" containerID="b492a7dd406b0c27babd0f943ac62c7e59cd70af84483b5b682c1f16e22a9e9e" Jan 29 12:31:20 crc kubenswrapper[4593]: I0129 12:31:20.075560 
Jan 29 12:31:20 crc kubenswrapper[4593]: E0129 12:31:20.076221 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:31:33 crc kubenswrapper[4593]: I0129 12:31:33.080555 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"
Jan 29 12:31:33 crc kubenswrapper[4593]: E0129 12:31:33.081442 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:31:47 crc kubenswrapper[4593]: I0129 12:31:47.083987 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"
Jan 29 12:31:47 crc kubenswrapper[4593]: E0129 12:31:47.085089 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.078881 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"]
Jan 29 12:31:50 crc kubenswrapper[4593]: E0129 12:31:50.079869 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04c1b6ee-aa78-4334-b212-4e15c4aceda7" containerName="collect-profiles"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.079902 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="04c1b6ee-aa78-4334-b212-4e15c4aceda7" containerName="collect-profiles"
Jan 29 12:31:50 crc kubenswrapper[4593]: E0129 12:31:50.079950 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.079958 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server"
Jan 29 12:31:50 crc kubenswrapper[4593]: E0129 12:31:50.079970 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="extract-utilities"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.079978 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="extract-utilities"
Jan 29 12:31:50 crc kubenswrapper[4593]: E0129 12:31:50.079997 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="extract-content"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.080004 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="extract-content"
Jan 29 12:31:50 crc kubenswrapper[4593]: E0129 12:31:50.080024 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.080032 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.080306 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="04c1b6ee-aa78-4334-b212-4e15c4aceda7" containerName="collect-profiles"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.080618 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.080651 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b4febee-8f26-4e76-a4b6-09da10523b68" containerName="registry-server"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.082511 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5l2gk"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.094509 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"]
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.206461 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.206597 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zjwt\" (UniqueName: \"kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.206663 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.308867 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk"
Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.309103 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk"
pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.309184 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zjwt\" (UniqueName: \"kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.309452 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.309597 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.332922 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zjwt\" (UniqueName: \"kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt\") pod \"certified-operators-5l2gk\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.421169 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:31:50 crc kubenswrapper[4593]: I0129 12:31:50.727057 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"] Jan 29 12:31:51 crc kubenswrapper[4593]: I0129 12:31:51.331017 4593 generic.go:334] "Generic (PLEG): container finished" podID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerID="b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb" exitCode=0 Jan 29 12:31:51 crc kubenswrapper[4593]: I0129 12:31:51.331138 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerDied","Data":"b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb"} Jan 29 12:31:51 crc kubenswrapper[4593]: I0129 12:31:51.331335 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerStarted","Data":"bf4434e5b035dba180315d3cb2ea4eca8d32e33cde7fe6fc465316c9c9d37f6c"} Jan 29 12:31:53 crc kubenswrapper[4593]: I0129 12:31:53.382071 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerStarted","Data":"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651"} Jan 29 12:31:57 crc kubenswrapper[4593]: I0129 12:31:57.437557 4593 generic.go:334] "Generic (PLEG): container finished" podID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerID="81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651" exitCode=0 Jan 29 12:31:57 crc kubenswrapper[4593]: I0129 12:31:57.437669 4593 
Jan 29 12:31:58 crc kubenswrapper[4593]: I0129 12:31:58.076919 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f"
Jan 29 12:31:58 crc kubenswrapper[4593]: E0129 12:31:58.077157 4593 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p4zf2_openshift-machine-config-operator(5eed1f11-8e73-4894-965f-a670f6c877b3)\"" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3"
Jan 29 12:31:59 crc kubenswrapper[4593]: I0129 12:31:59.461718 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerStarted","Data":"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25"}
Jan 29 12:31:59 crc kubenswrapper[4593]: I0129 12:31:59.490942 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5l2gk" podStartSLOduration=2.166959243 podStartE2EDuration="9.490895669s" podCreationTimestamp="2026-01-29 12:31:50 +0000 UTC" firstStartedPulling="2026-01-29 12:31:51.332436722 +0000 UTC m=+5577.205470913" lastFinishedPulling="2026-01-29 12:31:58.656373148 +0000 UTC m=+5584.529407339" observedRunningTime="2026-01-29 12:31:59.484341541 +0000 UTC m=+5585.357375742" watchObservedRunningTime="2026-01-29 12:31:59.490895669 +0000 UTC m=+5585.363929870"
Jan 29 12:32:00 crc kubenswrapper[4593]: I0129 12:32:00.421512 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5l2gk"
Jan 29 12:32:00 crc kubenswrapper[4593]: I0129 12:32:00.421675 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5l2gk"
Jan 29 12:32:01 crc kubenswrapper[4593]: I0129 12:32:01.467584 4593 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5l2gk" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="registry-server" probeResult="failure" output=<
Jan 29 12:32:01 crc kubenswrapper[4593]: timeout: failed to connect service ":50051" within 1s
Jan 29 12:32:01 crc kubenswrapper[4593]: >
Jan 29 12:32:03 crc kubenswrapper[4593]: I0129 12:32:03.503060 4593 generic.go:334] "Generic (PLEG): container finished" podID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerID="de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee" exitCode=0
Jan 29 12:32:03 crc kubenswrapper[4593]: I0129 12:32:03.503692 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" event={"ID":"65f07111-44a8-402c-887e-fb65ab51a2ba","Type":"ContainerDied","Data":"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee"}
Jan 29 12:32:03 crc kubenswrapper[4593]: I0129 12:32:03.504413 4593 scope.go:117] "RemoveContainer" containerID="de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee"
Jan 29 12:32:03 crc kubenswrapper[4593]: I0129 12:32:03.756347 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dw4s4_must-gather-vjpbp_65f07111-44a8-402c-887e-fb65ab51a2ba/gather/0.log"
path="/var/log/pods/openshift-must-gather-dw4s4_must-gather-vjpbp_65f07111-44a8-402c-887e-fb65ab51a2ba/gather/0.log" Jan 29 12:32:10 crc kubenswrapper[4593]: I0129 12:32:10.478578 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:32:10 crc kubenswrapper[4593]: I0129 12:32:10.529602 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:32:10 crc kubenswrapper[4593]: I0129 12:32:10.742997 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"] Jan 29 12:32:11 crc kubenswrapper[4593]: I0129 12:32:11.578319 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5l2gk" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="registry-server" containerID="cri-o://20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25" gracePeriod=2 Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.054750 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.074939 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.147380 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities\") pod \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.147896 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content\") pod \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.148134 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zjwt\" (UniqueName: \"kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt\") pod \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\" (UID: \"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df\") " Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.149086 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities" (OuterVolumeSpecName: "utilities") pod "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" (UID: "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.150835 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.155871 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt" (OuterVolumeSpecName: "kube-api-access-6zjwt") pod "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" (UID: "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df"). 
InnerVolumeSpecName "kube-api-access-6zjwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.212316 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" (UID: "5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.252735 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.252773 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zjwt\" (UniqueName: \"kubernetes.io/projected/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df-kube-api-access-6zjwt\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.593034 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"f8b1c574af947fa11ffe9b5caa5a417f8805b37c95e5b710480d0cd19a6f323f"} Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.603496 4593 generic.go:334] "Generic (PLEG): container finished" podID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerID="20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25" exitCode=0 Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.603555 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerDied","Data":"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25"} Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.603594 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5l2gk" event={"ID":"5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df","Type":"ContainerDied","Data":"bf4434e5b035dba180315d3cb2ea4eca8d32e33cde7fe6fc465316c9c9d37f6c"} Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.603616 4593 scope.go:117] "RemoveContainer" containerID="20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.604033 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5l2gk" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.667749 4593 scope.go:117] "RemoveContainer" containerID="81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.682735 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"] Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.701506 4593 scope.go:117] "RemoveContainer" containerID="b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.733044 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5l2gk"] Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.762280 4593 scope.go:117] "RemoveContainer" containerID="20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25" Jan 29 12:32:12 crc kubenswrapper[4593]: E0129 12:32:12.763029 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25\": container with ID starting with 20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25 not found: ID does not exist" containerID="20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.763070 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25"} err="failed to get container status \"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25\": rpc error: code = NotFound desc = could not find container \"20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25\": container with ID starting with 20a5a1bd0651aa7ac36b9a7d8d87d0220769b3d4033f80422ddb9f134b6a4d25 not found: ID does not exist" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.763098 4593 scope.go:117] "RemoveContainer" containerID="81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651" Jan 29 12:32:12 crc kubenswrapper[4593]: E0129 12:32:12.763390 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651\": container with ID starting with 81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651 not found: ID does not exist" containerID="81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.763412 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651"} err="failed to get container status \"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651\": rpc error: code = NotFound desc = could not find container \"81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651\": container with ID starting with 81223e57951a2e3b93d80c9f2820055849f57c26f562e22e5abeba878ada1651 not found: ID does not exist" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.763426 4593 scope.go:117] "RemoveContainer" containerID="b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb" Jan 29 12:32:12 crc kubenswrapper[4593]: E0129 12:32:12.763960 4593 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb\": container with ID starting with b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb not found: ID does not exist" containerID="b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb" Jan 29 12:32:12 crc kubenswrapper[4593]: I0129 12:32:12.763983 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb"} err="failed to get container status \"b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb\": rpc error: code = NotFound desc = could not find container \"b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb\": container with ID starting with b3276f0e5aa2ffa94751a44f64dd12fe7ecb48344985fe6e93e729e1ba9090bb not found: ID does not exist" Jan 29 12:32:13 crc kubenswrapper[4593]: I0129 12:32:13.087881 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" path="/var/lib/kubelet/pods/5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df/volumes" Jan 29 12:32:17 crc kubenswrapper[4593]: I0129 12:32:17.862983 4593 scope.go:117] "RemoveContainer" containerID="1c377ca355fa720f0d286a362dd30108927c61a24acc46c9847397398d91107e" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.156583 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dw4s4/must-gather-vjpbp"] Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.156962 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="copy" containerID="cri-o://1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a" gracePeriod=2 Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.165156 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dw4s4/must-gather-vjpbp"] Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.591256 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dw4s4_must-gather-vjpbp_65f07111-44a8-402c-887e-fb65ab51a2ba/copy/0.log" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.592375 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.660131 4593 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dw4s4_must-gather-vjpbp_65f07111-44a8-402c-887e-fb65ab51a2ba/copy/0.log" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.660666 4593 generic.go:334] "Generic (PLEG): container finished" podID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerID="1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a" exitCode=143 Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.660769 4593 scope.go:117] "RemoveContainer" containerID="1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.660776 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dw4s4/must-gather-vjpbp" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.678761 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mslvw\" (UniqueName: \"kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw\") pod \"65f07111-44a8-402c-887e-fb65ab51a2ba\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.679148 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output\") pod \"65f07111-44a8-402c-887e-fb65ab51a2ba\" (UID: \"65f07111-44a8-402c-887e-fb65ab51a2ba\") " Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.691168 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw" (OuterVolumeSpecName: "kube-api-access-mslvw") pod "65f07111-44a8-402c-887e-fb65ab51a2ba" (UID: "65f07111-44a8-402c-887e-fb65ab51a2ba"). InnerVolumeSpecName "kube-api-access-mslvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.701689 4593 scope.go:117] "RemoveContainer" containerID="de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.769430 4593 scope.go:117] "RemoveContainer" containerID="1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a" Jan 29 12:32:18 crc kubenswrapper[4593]: E0129 12:32:18.771648 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a\": container with ID starting with 1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a not found: ID does not exist" containerID="1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.771686 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a"} err="failed to get container status \"1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a\": rpc error: code = NotFound desc = could not find container \"1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a\": container with ID starting with 1feec9852be62edc7f198220f764a5c74cb5410083acfe510ab8aa789824da8a not found: ID does not exist" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.771708 4593 scope.go:117] "RemoveContainer" containerID="de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee" Jan 29 12:32:18 crc kubenswrapper[4593]: E0129 12:32:18.772161 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee\": container with ID starting with de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee not found: ID does not exist" containerID="de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.772295 4593 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee"} err="failed to get container status \"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee\": rpc error: code = NotFound desc = could not find container \"de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee\": container with ID starting with de71b4032d10072bd82e38895c6203cec0fc48ffa350c02731e705e0242d4fee not found: ID does not exist" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.781697 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mslvw\" (UniqueName: \"kubernetes.io/projected/65f07111-44a8-402c-887e-fb65ab51a2ba-kube-api-access-mslvw\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.920254 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "65f07111-44a8-402c-887e-fb65ab51a2ba" (UID: "65f07111-44a8-402c-887e-fb65ab51a2ba"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:18 crc kubenswrapper[4593]: I0129 12:32:18.985965 4593 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/65f07111-44a8-402c-887e-fb65ab51a2ba-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:19 crc kubenswrapper[4593]: I0129 12:32:19.087271 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" path="/var/lib/kubelet/pods/65f07111-44a8-402c-887e-fb65ab51a2ba/volumes" Jan 29 12:34:33 crc kubenswrapper[4593]: I0129 12:34:33.945691 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:34:33 crc kubenswrapper[4593]: I0129 12:34:33.946309 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.931915 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:34:46 crc kubenswrapper[4593]: E0129 12:34:46.932828 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="extract-content" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.932841 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="extract-content" Jan 29 12:34:46 crc kubenswrapper[4593]: E0129 12:34:46.932855 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="gather" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.932862 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="gather" Jan 29 12:34:46 crc kubenswrapper[4593]: E0129 12:34:46.932873 4593 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="registry-server" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.932883 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="registry-server" Jan 29 12:34:46 crc kubenswrapper[4593]: E0129 12:34:46.932897 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="extract-utilities" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.932907 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="extract-utilities" Jan 29 12:34:46 crc kubenswrapper[4593]: E0129 12:34:46.932944 4593 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="copy" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.932950 4593 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="copy" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.933143 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bbab5d9-1d70-4e6f-ace2-a1a64d58f0df" containerName="registry-server" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.933158 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="copy" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.933176 4593 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f07111-44a8-402c-887e-fb65ab51a2ba" containerName="gather" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.934462 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.961367 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.965269 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.965321 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrss7\" (UniqueName: \"kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:46 crc kubenswrapper[4593]: I0129 12:34:46.965376 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.066910 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content\") pod \"community-operators-c2lqd\" (UID: 
\"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.067250 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrss7\" (UniqueName: \"kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.067331 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.067404 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.067688 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.091959 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrss7\" (UniqueName: \"kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7\") pod \"community-operators-c2lqd\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.258542 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.738407 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.939741 4593 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.942024 4593 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:47 crc kubenswrapper[4593]: I0129 12:34:47.950290 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.108012 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkjkq\" (UniqueName: \"kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.109159 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.109413 4593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.211490 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.212020 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.212237 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.213165 4593 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkjkq\" (UniqueName: \"kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.213168 4593 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.239946 4593 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nkjkq\" (UniqueName: \"kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq\") pod \"redhat-marketplace-hkpl9\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.256107 4593 generic.go:334] "Generic (PLEG): container finished" podID="092caf89-afd5-4bc4-aa5b-afa0b8583122" containerID="0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5" exitCode=0 Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.256156 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerDied","Data":"0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5"} Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.256185 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerStarted","Data":"925fae481b629ccb1893d79864a8245208c10343beb67fe181c165267988eb8c"} Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.351253 4593 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:48 crc kubenswrapper[4593]: I0129 12:34:48.864261 4593 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:34:48 crc kubenswrapper[4593]: W0129 12:34:48.865689 4593 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfd19db0_a9c1_4aa7_a665_957e97ca991e.slice/crio-37a9227090be4488b450bcbf68a9eb08d9ba234d201cf73e5fe50319f24163dc WatchSource:0}: Error finding container 37a9227090be4488b450bcbf68a9eb08d9ba234d201cf73e5fe50319f24163dc: Status 404 returned error can't find the container with id 37a9227090be4488b450bcbf68a9eb08d9ba234d201cf73e5fe50319f24163dc Jan 29 12:34:49 crc kubenswrapper[4593]: I0129 12:34:49.276155 4593 generic.go:334] "Generic (PLEG): container finished" podID="dfd19db0-a9c1-4aa7-a665-957e97ca991e" containerID="c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e" exitCode=0 Jan 29 12:34:49 crc kubenswrapper[4593]: I0129 12:34:49.276229 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerDied","Data":"c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e"} Jan 29 12:34:49 crc kubenswrapper[4593]: I0129 12:34:49.276262 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerStarted","Data":"37a9227090be4488b450bcbf68a9eb08d9ba234d201cf73e5fe50319f24163dc"} Jan 29 12:34:50 crc kubenswrapper[4593]: I0129 12:34:50.287071 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerStarted","Data":"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70"} Jan 29 12:34:53 crc kubenswrapper[4593]: I0129 12:34:53.315665 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" 
event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerStarted","Data":"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e"} Jan 29 12:34:53 crc kubenswrapper[4593]: I0129 12:34:53.318702 4593 generic.go:334] "Generic (PLEG): container finished" podID="092caf89-afd5-4bc4-aa5b-afa0b8583122" containerID="7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70" exitCode=0 Jan 29 12:34:53 crc kubenswrapper[4593]: I0129 12:34:53.318759 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerDied","Data":"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70"} Jan 29 12:34:54 crc kubenswrapper[4593]: I0129 12:34:54.331278 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerStarted","Data":"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b"} Jan 29 12:34:54 crc kubenswrapper[4593]: I0129 12:34:54.372502 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c2lqd" podStartSLOduration=2.748048947 podStartE2EDuration="8.37247321s" podCreationTimestamp="2026-01-29 12:34:46 +0000 UTC" firstStartedPulling="2026-01-29 12:34:48.258598419 +0000 UTC m=+5754.131632610" lastFinishedPulling="2026-01-29 12:34:53.883022672 +0000 UTC m=+5759.756056873" observedRunningTime="2026-01-29 12:34:54.362791948 +0000 UTC m=+5760.235826159" watchObservedRunningTime="2026-01-29 12:34:54.37247321 +0000 UTC m=+5760.245507401" Jan 29 12:34:55 crc kubenswrapper[4593]: I0129 12:34:55.348336 4593 generic.go:334] "Generic (PLEG): container finished" podID="dfd19db0-a9c1-4aa7-a665-957e97ca991e" containerID="214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e" exitCode=0 Jan 29 12:34:55 crc kubenswrapper[4593]: I0129 12:34:55.351109 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerDied","Data":"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e"} Jan 29 12:34:56 crc kubenswrapper[4593]: I0129 12:34:56.362122 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerStarted","Data":"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5"} Jan 29 12:34:56 crc kubenswrapper[4593]: I0129 12:34:56.385130 4593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hkpl9" podStartSLOduration=2.722455474 podStartE2EDuration="9.385111384s" podCreationTimestamp="2026-01-29 12:34:47 +0000 UTC" firstStartedPulling="2026-01-29 12:34:49.278898804 +0000 UTC m=+5755.151932995" lastFinishedPulling="2026-01-29 12:34:55.941554704 +0000 UTC m=+5761.814588905" observedRunningTime="2026-01-29 12:34:56.38127013 +0000 UTC m=+5762.254304331" watchObservedRunningTime="2026-01-29 12:34:56.385111384 +0000 UTC m=+5762.258145575" Jan 29 12:34:57 crc kubenswrapper[4593]: I0129 12:34:57.258986 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:57 crc kubenswrapper[4593]: I0129 12:34:57.259769 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:57 crc kubenswrapper[4593]: I0129 12:34:57.304470 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:34:58 crc kubenswrapper[4593]: I0129 12:34:58.352016 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:58 crc kubenswrapper[4593]: I0129 12:34:58.353900 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:34:58 crc kubenswrapper[4593]: I0129 12:34:58.399237 4593 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:35:03 crc kubenswrapper[4593]: I0129 12:35:03.946029 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:35:03 crc kubenswrapper[4593]: I0129 12:35:03.946552 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:35:07 crc kubenswrapper[4593]: I0129 12:35:07.324170 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:35:07 crc kubenswrapper[4593]: I0129 12:35:07.382689 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:35:07 crc kubenswrapper[4593]: I0129 12:35:07.465592 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c2lqd" podUID="092caf89-afd5-4bc4-aa5b-afa0b8583122" containerName="registry-server" containerID="cri-o://af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b" gracePeriod=2 Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.257357 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.400172 4593 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.409704 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities\") pod \"092caf89-afd5-4bc4-aa5b-afa0b8583122\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.411176 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content\") pod \"092caf89-afd5-4bc4-aa5b-afa0b8583122\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.411416 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrss7\" (UniqueName: \"kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7\") pod \"092caf89-afd5-4bc4-aa5b-afa0b8583122\" (UID: \"092caf89-afd5-4bc4-aa5b-afa0b8583122\") " Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.411139 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities" (OuterVolumeSpecName: "utilities") pod "092caf89-afd5-4bc4-aa5b-afa0b8583122" (UID: "092caf89-afd5-4bc4-aa5b-afa0b8583122"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.423010 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7" (OuterVolumeSpecName: "kube-api-access-wrss7") pod "092caf89-afd5-4bc4-aa5b-afa0b8583122" (UID: "092caf89-afd5-4bc4-aa5b-afa0b8583122"). InnerVolumeSpecName "kube-api-access-wrss7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.477500 4593 generic.go:334] "Generic (PLEG): container finished" podID="092caf89-afd5-4bc4-aa5b-afa0b8583122" containerID="af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b" exitCode=0 Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.477556 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerDied","Data":"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b"} Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.477588 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2lqd" event={"ID":"092caf89-afd5-4bc4-aa5b-afa0b8583122","Type":"ContainerDied","Data":"925fae481b629ccb1893d79864a8245208c10343beb67fe181c165267988eb8c"} Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.477605 4593 scope.go:117] "RemoveContainer" containerID="af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.477813 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2lqd" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.480709 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "092caf89-afd5-4bc4-aa5b-afa0b8583122" (UID: "092caf89-afd5-4bc4-aa5b-afa0b8583122"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.502797 4593 scope.go:117] "RemoveContainer" containerID="7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.513141 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.513190 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/092caf89-afd5-4bc4-aa5b-afa0b8583122-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.513206 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrss7\" (UniqueName: \"kubernetes.io/projected/092caf89-afd5-4bc4-aa5b-afa0b8583122-kube-api-access-wrss7\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.527839 4593 scope.go:117] "RemoveContainer" containerID="0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.587124 4593 scope.go:117] "RemoveContainer" containerID="af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b" Jan 29 12:35:08 crc kubenswrapper[4593]: E0129 12:35:08.588461 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b\": container with ID starting with af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b not found: ID does not exist" containerID="af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.588536 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b"} err="failed to get container status \"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b\": rpc error: code = NotFound desc = could not find container \"af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b\": container with ID starting with af5f41d7b6aa735c7e64450ded8184aea35e3b30170b62352d04aa8eee2dd27b not found: ID does not exist" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.588572 4593 scope.go:117] "RemoveContainer" containerID="7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70" Jan 29 12:35:08 crc kubenswrapper[4593]: E0129 12:35:08.589532 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70\": container with ID starting with 7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70 not found: ID does not exist" 
containerID="7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.589596 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70"} err="failed to get container status \"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70\": rpc error: code = NotFound desc = could not find container \"7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70\": container with ID starting with 7e82a241caeb9d763f0230669c4ac7dd36408cffa64f71e2fea231d72969af70 not found: ID does not exist" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.589659 4593 scope.go:117] "RemoveContainer" containerID="0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5" Jan 29 12:35:08 crc kubenswrapper[4593]: E0129 12:35:08.590069 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5\": container with ID starting with 0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5 not found: ID does not exist" containerID="0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.590118 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5"} err="failed to get container status \"0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5\": rpc error: code = NotFound desc = could not find container \"0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5\": container with ID starting with 0e99862554d61960d63664a42ceb2a683f70b91d8ed18fd30cacaf30e90da0e5 not found: ID does not exist" Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.812098 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:35:08 crc kubenswrapper[4593]: I0129 12:35:08.820176 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c2lqd"] Jan 29 12:35:09 crc kubenswrapper[4593]: I0129 12:35:09.091065 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092caf89-afd5-4bc4-aa5b-afa0b8583122" path="/var/lib/kubelet/pods/092caf89-afd5-4bc4-aa5b-afa0b8583122/volumes" Jan 29 12:35:10 crc kubenswrapper[4593]: I0129 12:35:10.769865 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:35:10 crc kubenswrapper[4593]: I0129 12:35:10.770571 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hkpl9" podUID="dfd19db0-a9c1-4aa7-a665-957e97ca991e" containerName="registry-server" containerID="cri-o://4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5" gracePeriod=2 Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.296507 4593 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.320485 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content\") pod \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.320572 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities\") pod \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.320815 4593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkjkq\" (UniqueName: \"kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq\") pod \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\" (UID: \"dfd19db0-a9c1-4aa7-a665-957e97ca991e\") " Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.321471 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities" (OuterVolumeSpecName: "utilities") pod "dfd19db0-a9c1-4aa7-a665-957e97ca991e" (UID: "dfd19db0-a9c1-4aa7-a665-957e97ca991e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.327205 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq" (OuterVolumeSpecName: "kube-api-access-nkjkq") pod "dfd19db0-a9c1-4aa7-a665-957e97ca991e" (UID: "dfd19db0-a9c1-4aa7-a665-957e97ca991e"). InnerVolumeSpecName "kube-api-access-nkjkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.367368 4593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfd19db0-a9c1-4aa7-a665-957e97ca991e" (UID: "dfd19db0-a9c1-4aa7-a665-957e97ca991e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.422822 4593 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkjkq\" (UniqueName: \"kubernetes.io/projected/dfd19db0-a9c1-4aa7-a665-957e97ca991e-kube-api-access-nkjkq\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.422867 4593 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.422879 4593 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfd19db0-a9c1-4aa7-a665-957e97ca991e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.511896 4593 generic.go:334] "Generic (PLEG): container finished" podID="dfd19db0-a9c1-4aa7-a665-957e97ca991e" containerID="4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5" exitCode=0 Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.511953 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerDied","Data":"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5"} Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.511996 4593 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkpl9" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.512035 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkpl9" event={"ID":"dfd19db0-a9c1-4aa7-a665-957e97ca991e","Type":"ContainerDied","Data":"37a9227090be4488b450bcbf68a9eb08d9ba234d201cf73e5fe50319f24163dc"} Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.512061 4593 scope.go:117] "RemoveContainer" containerID="4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.541899 4593 scope.go:117] "RemoveContainer" containerID="214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.557871 4593 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.594386 4593 scope.go:117] "RemoveContainer" containerID="c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.605008 4593 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkpl9"] Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.622979 4593 scope.go:117] "RemoveContainer" containerID="4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5" Jan 29 12:35:11 crc kubenswrapper[4593]: E0129 12:35:11.623550 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5\": container with ID starting with 4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5 not found: ID does not exist" containerID="4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.623585 4593 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5"} err="failed to get container status \"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5\": rpc error: code = NotFound desc = could not find container \"4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5\": container with ID starting with 4287add40041bf91d6ad0a5de239ea992b63c42f8b800bb41aa810181161a9a5 not found: ID does not exist" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.623608 4593 scope.go:117] "RemoveContainer" containerID="214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e" Jan 29 12:35:11 crc kubenswrapper[4593]: E0129 12:35:11.623881 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e\": container with ID starting with 214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e not found: ID does not exist" containerID="214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.623917 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e"} err="failed to get container status \"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e\": rpc error: code = NotFound desc = could not find container \"214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e\": container with ID starting with 214d60fd41eae04a94f31d07eb3bb60c158fa46d3892b0f1769ba4ba59e7194e not found: ID does not exist" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.623949 4593 scope.go:117] "RemoveContainer" containerID="c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e" Jan 29 12:35:11 crc kubenswrapper[4593]: E0129 12:35:11.624457 4593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e\": container with ID starting with c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e not found: ID does not exist" containerID="c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e" Jan 29 12:35:11 crc kubenswrapper[4593]: I0129 12:35:11.624501 4593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e"} err="failed to get container status \"c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e\": rpc error: code = NotFound desc = could not find container \"c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e\": container with ID starting with c37eb1712e814a3c862e1d1e8797c89651982d35a646d9cb0ec9148ed8453b9e not found: ID does not exist" Jan 29 12:35:13 crc kubenswrapper[4593]: I0129 12:35:13.088041 4593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd19db0-a9c1-4aa7-a665-957e97ca991e" path="/var/lib/kubelet/pods/dfd19db0-a9c1-4aa7-a665-957e97ca991e/volumes" Jan 29 12:35:33 crc kubenswrapper[4593]: I0129 12:35:33.945666 4593 patch_prober.go:28] interesting pod/machine-config-daemon-p4zf2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:35:33 crc kubenswrapper[4593]: I0129 12:35:33.946225 4593 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:35:33 crc kubenswrapper[4593]: I0129 12:35:33.946290 4593 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" Jan 29 12:35:33 crc kubenswrapper[4593]: I0129 12:35:33.947100 4593 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8b1c574af947fa11ffe9b5caa5a417f8805b37c95e5b710480d0cd19a6f323f"} pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:35:33 crc kubenswrapper[4593]: I0129 12:35:33.947188 4593 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" podUID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerName="machine-config-daemon" containerID="cri-o://f8b1c574af947fa11ffe9b5caa5a417f8805b37c95e5b710480d0cd19a6f323f" gracePeriod=600 Jan 29 12:35:34 crc kubenswrapper[4593]: I0129 12:35:34.746537 4593 generic.go:334] "Generic (PLEG): container finished" podID="5eed1f11-8e73-4894-965f-a670f6c877b3" containerID="f8b1c574af947fa11ffe9b5caa5a417f8805b37c95e5b710480d0cd19a6f323f" exitCode=0 Jan 29 12:35:34 crc kubenswrapper[4593]: I0129 12:35:34.746585 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerDied","Data":"f8b1c574af947fa11ffe9b5caa5a417f8805b37c95e5b710480d0cd19a6f323f"} Jan 29 12:35:34 crc kubenswrapper[4593]: I0129 12:35:34.746981 4593 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p4zf2" event={"ID":"5eed1f11-8e73-4894-965f-a670f6c877b3","Type":"ContainerStarted","Data":"6b515c98bc904e1b309f647418f96aa9ffe74921bccaa9ccb23cdbcb47a4d89e"} Jan 29 12:35:34 crc kubenswrapper[4593]: I0129 12:35:34.747030 4593 scope.go:117] "RemoveContainer" containerID="c509826531425491a9307e1314bf997093d1f7b98e3d0a1e5112bf14dda1d72f" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515136652103024447 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015136652103017364 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015136636116016515 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015136636116015465 5ustar corecore